Unverified commit 619b1833, authored by SunAhong1993 and committed by GitHub

Merge pull request #2 from PaddlePaddle/develop

oo
- repo: https://github.com/PaddlePaddle/mirrors-yapf.git
  sha: 0d79c0c469bab64f7229c9aca2b1186ef47f0e37
  hooks:
  - id: yapf
    files: \.py$
- repo: https://github.com/pre-commit/pre-commit-hooks
  sha: a11d9314b22d8f8c7556443875b731ef05965464
  hooks:
@@ -18,6 +14,7 @@
  - id: check-symlinks
  - id: check-added-large-files
- repo: local
  hooks:
  - id: copyright_checker
    name: copyright_checker
......
language: python
python:
- '3.5'
- '3.6'
script:
- if [[ $TRAVIS_PYTHON_VERSION != 2.7 ]]; then /bin/bash ./tools/check_code_style.sh; fi
......
@@ -44,10 +44,15 @@ x2paddle --framework=caffe --prototxt=deploy.prototxt --weight=deploy.caffemodel
```
x2paddle --framework=onnx --model=onnx_model.onnx --save_dir=pd_model
```
### Paddle2ONNX
```
# Note: paddle_infer_model_dir must contain both the __model__ and __params__ files
x2paddle --framework=paddle2onnx --model=paddle_infer_model_dir --save_dir=onnx_model
```
### Options
| Option | Description |
|----------|--------------|
|--framework | source model framework (tensorflow, caffe, onnx, paddle2onnx) |
|--prototxt | path to the proto file of the caffe model; required when framework is caffe |
|--weight | path to the parameter file of the caffe model; required when framework is caffe |
|--save_dir | directory in which the converted model is saved |
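For example, the Caffe options combine into a single command (the file names are placeholders, matching the examples above):
```
x2paddle --framework=caffe --prototxt=deploy.prototxt --weight=deploy.caffemodel --save_dir=pd_model
```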
......
@@ -11,8 +11,7 @@ setuptools.setup(
    version=x2paddle.__version__,
    author="dltp-sz",
    author_email="dltp-sz@baidu.com",
    description="a toolkit for converting trained model to PaddlePaddle from other deep learning frameworks.",
    long_description=long_description,
    long_description_content_type="text/plain",
    url="https://github.com/PaddlePaddle/x2paddle",
@@ -23,6 +22,4 @@ setuptools.setup(
        "Operating System :: OS Independent",
    ],
    license='Apache 2.0',
    entry_points={'console_scripts': ['x2paddle=x2paddle.convert:main', ]})
@@ -5,12 +5,14 @@ model_dir = sys.argv[1]
new_model_dir = sys.argv[2]
exe = fluid.Executor(fluid.CPUPlace())
[inference_program, feed_target_names,
 fetch_targets] = fluid.io.load_inference_model(
     dirname=model_dir, executor=exe)
print(feed_target_names)
fluid.io.save_inference_model(
    dirname=new_model_dir,
    feeded_var_names=feed_target_names,
    target_vars=fetch_targets,
    executor=exe,
    main_program=inference_program,
    params_filename="__params__")
__version__ = "0.7.1" __version__ = "0.7.4"
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
# #
# Licensed under the Apache License, Version 2.0 (the "License" # Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License. # you may not use this file except in compliance with the License.
@@ -19,32 +19,37 @@ import sys
def arg_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model",
        "-m",
        type=_text_type,
        default=None,
        help="define model file path for tensorflow or onnx")
    parser.add_argument(
        "--prototxt",
        "-p",
        type=_text_type,
        default=None,
        help="prototxt file of caffe model")
    parser.add_argument(
        "--weight",
        "-w",
        type=_text_type,
        default=None,
        help="weight file of caffe model")
    parser.add_argument(
        "--save_dir",
        "-s",
        type=_text_type,
        default=None,
        help="path to save translated model")
    parser.add_argument(
        "--framework",
        "-f",
        type=_text_type,
        default=None,
        help="define which deeplearning framework(tensorflow/caffe/onnx/paddle2onnx)"
    )
    parser.add_argument(
        "--caffe_proto",
        "-c",
@@ -52,27 +57,30 @@ def arg_parser():
        default=None,
        help="optional: the .py file compiled by caffe proto file of caffe model"
    )
    parser.add_argument(
        "--version",
        "-v",
        action="store_true",
        default=False,
        help="get version of x2paddle")
    parser.add_argument(
        "--without_data_format_optimization",
        "-wo",
        action="store_true",
        default=False,
        help="tf model conversion without data format optimization")
    parser.add_argument(
        "--define_input_shape",
        "-d",
        action="store_true",
        default=False,
        help="define input shape for tf model")
    parser.add_argument(
        "--params_merge",
        "-pm",
        action="store_true",
        default=False,
        help="define whether merge the params")

    return parser
@@ -117,7 +125,6 @@ def tf2paddle(model_path,
        optimizer.merge_bias()
        optimizer.optimize_sub_graph()
        # optimizer.merge_batch_norm()
        # optimizer.merge_prelu()
    else:
@@ -177,6 +184,14 @@ def onnx2paddle(model_path, save_dir, params_merge=False):
    mapper.save_inference_model(save_dir, params_merge)


def paddle2onnx(model_path, save_dir):
    from x2paddle.decoder.paddle_decoder import PaddleDecoder
    from x2paddle.op_mapper.paddle_op_mapper import PaddleOpMapper
    model = PaddleDecoder(model_path, '__model__', '__params__')
    mapper = PaddleOpMapper()
    mapper.convert(model.program, save_dir)
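The new `paddle2onnx` path mirrors the other converters: decode the Paddle inference model, then map its program to ONNX. A minimal sketch of calling it directly, assuming an installed x2paddle that includes the modules added in this commit (directory names are placeholders):
```python
# Sketch only: convert a Paddle inference model (a directory containing
# __model__ and __params__) to ONNX via the function introduced in this diff.
from x2paddle.convert import paddle2onnx

paddle2onnx('paddle_infer_model_dir', 'onnx_model')  # placeholder paths
```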
def main():
    if len(sys.argv) < 2:
        print("Use \"x2paddle -h\" to print the help information")
@@ -249,8 +264,14 @@ def main():
        if args.params_merge:
            params_merge = True
        onnx2paddle(args.model, args.save_dir, params_merge)
    elif args.framework == "paddle2onnx":
        assert args.model is not None, "--model should be defined while translating paddle model to onnx"
        paddle2onnx(args.model, args.save_dir)
    else:
        raise Exception(
            "--framework only support tensorflow/caffe/onnx/paddle2onnx now")

if __name__ == "__main__":
......
@@ -46,8 +46,9 @@ class Layer(object):
            for input in self.inputs:
                if isinstance(input, GraphNode):
                    if hasattr(input, "index"):
                        in_list += (
                            input.layer_name + "[{}]".format(input.index) + ", "
                        )
                    else:
                        in_list += (input.layer_name + ", ")
                elif isinstance(input, six.string_types):
@@ -71,8 +72,8 @@ class Layer(object):
                layer_code = layer_code + key + "={}, ".format(input)
        elif isinstance(self.inputs, GraphNode):
            if hasattr(self.inputs, "index"):
                layer_code += (
                    self.inputs.layer_name + "[{}]".format(self.inputs.index))
            else:
                layer_code += (self.inputs.layer_name)
        if self.op != "=":
@@ -88,6 +89,8 @@ class Layer(object):
            for key, value in param_attr.items():
                if '\n' in str(value):
                    value = string(str(value).replace('\n', ','))
                if str(key) == 'attr':
                    value = 'ParamAttr(' + str(value) + ')'
                layer_code = layer_code + key + "={}, ".format(value)
        layer_code = layer_code.strip(", ")
......
@@ -64,10 +64,8 @@ def run_net(param_dir="./"):
        b = os.path.exists(os.path.join(param_dir, var.name))
        return b

    fluid.io.load_vars(
        exe, param_dir, fluid.default_main_program(), predicate=if_exist)


class OpMapper(object):
@@ -98,8 +96,8 @@ class OpMapper(object):
    def add_codes(self, codes, indent=0):
        if isinstance(codes, list):
            for code in codes:
                self.paddle_codes += (
                    self.tab * indent + code.strip('\n') + '\n')
        elif isinstance(codes, str):
            self.paddle_codes += (self.tab * indent + codes.strip('\n') + '\n')
        else:
@@ -135,24 +133,25 @@ class OpMapper(object):
                    os.path.join(os.path.join(py_code_dir, var.name)))
                return b

            fluid.io.load_vars(
                exe,
                py_code_dir,
                fluid.default_main_program(),
                predicate=if_exist)
            if params_merge:
                fluid.io.save_inference_model(
                    dirname=os.path.join(save_dir, "inference_model"),
                    feeded_var_names=input_names,
                    target_vars=outputs,
                    executor=exe,
                    params_filename="__params__")
            else:
                fluid.io.save_inference_model(
                    dirname=os.path.join(save_dir, "inference_model"),
                    feeded_var_names=input_names,
                    target_vars=outputs,
                    executor=exe,
                    params_filename=None)
        except:
            raise Exception(
                "Paddle code was saved in {}/model.py, but seems there's wrong exist, please check model.py manually."
......
@@ -49,13 +49,11 @@ class CaffeResolver(object):
class CaffeGraphNode(GraphNode):
    def __init__(self, layer, type_str, layer_name=None):
        if layer_name is None:
            super(CaffeGraphNode, self).__init__(
                layer, layer.name.replace('/', '_').replace('-', '_'))
        else:
            super(CaffeGraphNode, self).__init__(
                layer, layer_name.replace('/', '_').replace('-', '_'))
        self.layer_type = type_str
        self.fluid_code = FluidCode()
        self.data = None
@@ -268,8 +266,8 @@ class CaffeDecoder(object):
            c_i = blob.channels
            h = blob.height
            w = blob.width
            data = np.asarray(
                list(blob.data), dtype=np.float32).reshape(c_o, c_i, h, w)
            transformed.append(data)
        return transformed
The source diff for this file is too large to display; you can view the blob instead.
@@ -71,9 +71,8 @@ class ONNXGraphNode(GraphNode):
        if attr.type == onnx.AttributeProto.TENSOR:
            dtype = np.dtype(TENSOR_TYPE_TO_NP_TYPE[attr.t.data_type])
            data = attr.t.raw_data
            value = np.frombuffer(
                data, dtype=dtype, count=(len(data) // dtype.itemsize))
        elif attr.type == onnx.AttributeProto.STRING:
            value = attr.s
            value = value.decode() if isinstance(value, bytes) else value
@@ -205,9 +204,8 @@ class ONNXGraph(Graph):
            self.node_map[name].weight = weight
            self.node_map[name].embeded_as = []
        else:
            self.node_map[name] = ONNXGraphDataNode(
                initializer, layer_name=name, is_global_input=False)
            self.node_map[name].weight = weight
            self.node_map[name].embeded_as = []
@@ -494,8 +492,8 @@ class ONNXDecoder(object):
            sess = rt.InferenceSession(model_path)
            for ipt in sess.get_inputs():
                datatype = datatype_map[ipt.type]
                input_dict[ipt.name] = np.random.random(ipt.shape).astype(
                    datatype)
            res = sess.run(None, input_feed=input_dict)
        except:
......
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
class PaddleDecoder(object):
    def __init__(self,
                 model_dir,
                 model_filename='__model__',
                 params_filename=None):
        exe = fluid.Executor(fluid.CPUPlace())
        [self.program, feed, fetchs] = fluid.io.load_inference_model(
            model_dir,
            exe,
            model_filename=model_filename,
            params_filename=params_filename)
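A hedged usage sketch of this new decoder: it wraps `fluid.io.load_inference_model` and keeps the parsed program on `self.program` for the op mapper to walk (the model directory below is a placeholder):
```python
# Sketch only: load a Paddle inference model and inspect its program.
from x2paddle.decoder.paddle_decoder import PaddleDecoder

decoder = PaddleDecoder('paddle_infer_model_dir',
                        model_filename='__model__',
                        params_filename='__params__')
print(decoder.program)  # the fluid.Program consumed by PaddleOpMapper
```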
@@ -120,13 +120,13 @@ class TFGraph(Graph):
    def build(self):
        for layer in self.model.node:
            self.node_map[layer.name.replace('/', '_').replace(
                '-', '_')] = TFGraphNode(
                    layer, data_format=self.tf_data_format)
        for layer_name, node in self.node_map.items():
            for in_node in node.layer.input:
                in_node = in_node.replace('/', '_').replace('-', '_').replace(
                    '^', '')
                if in_node not in self.node_map:
                    if in_node.strip().split(':')[0] in self.node_map:
                        self.connect(in_node.strip().split(':')[0], layer_name)
@@ -390,10 +390,10 @@ class TFDecoder(object):
                    shape=shape,
                    name="x2paddle_{}".format(layer.name))
            except:
                x2paddle_input = tf.placeholder(
                    dtype=dtype,
                    shape=shape,
                    name="x2paddle_{}".format(layer.name))
            input_map["{}:0".format(layer.name)] = x2paddle_input
            if shape.count(None) > 0:
......
@@ -122,16 +122,17 @@ def convolutiondepthwise_layer(inputs,
    c_out = num_output if num_output is not None else input_shape[0][1]
    group = int(c_in / (c_in / c_out)) if c_in > c_out else int(c_in /
                                                                (c_out / c_in))
    out = fluid.layers.conv2d(
        input,
        dilation=[dila_h, dila_w],
        filter_size=[k_h, k_w],
        stride=[s_h, s_w],
        padding=[p_h, p_w],
        groups=group,
        num_filters=c_out,
        param_attr=name + '_weights',
        bias_attr=name + '_bias',
        name=name)
    return out
@@ -142,7 +143,8 @@ def convolutiondepthwise_weights(name, data=None):
    return weights_name


register(
    kind='ConvolutionDepthwise',
    shape=convolutiondepthwise_shape,
    layer=convolutiondepthwise_layer,
    weights=convolutiondepthwise_weights)
@@ -37,8 +37,8 @@ def detectionoutput_layer(inputs,
    pbv = fluid.layers.reshape(x=pbv, shape=[-1, 4])
    mbox_loc = inputs[0]
    mbox_loc = fluid.layers.reshape(x=mbox_loc, shape=[-1, pb.shape[0], 4])
    mbox_conf_flatten = fluid.layers.reshape(
        x=mbox_conf_flatten, shape=[0, pb.shape[0], -1])
    default = {"nms_threshold": 0.3, "top_k": 10, "eta": 1.0}
    fields = ['eta', 'top_k', 'nms_threshold']
@@ -64,7 +64,8 @@ def detectionoutput_weights(name, data=None):
    return weights_name


register(
    kind='DetectionOutput',
    shape=detectionoutput_shape,
    layer=detectionoutput_layer,
    weights=detectionoutput_weights)
@@ -20,9 +20,8 @@ def normalize_layer(inputs,
        attr=name + '_scale')
    scale_param = fluid.layers.reshape(x=scale_param, \
        shape=[1] if channel_shared else [input_shape[0][1]])
    out = fluid.layers.elementwise_mul(
        x=l2_norm, y=scale_param, axis=-1 if channel_shared else 1)
    return out
@@ -31,7 +30,8 @@ def normalize_weights(name, data=None):
    return weights_name


register(
    kind='Normalize',
    shape=normalize_shape,
    layer=normalize_layer,
    weights=normalize_weights)
@@ -23,7 +23,8 @@ def permute_weights(name, data=None):
    return weights_name


register(
    kind='Permute',
    shape=permute_shape,
    layer=permute_layer,
    weights=permute_weights)
@@ -30,18 +30,19 @@ def priorbox_layer(inputs,
    steps = tuple(step) if type(step) is list or type(step) is tuple else (step,
                                                                           step)
    box, variance_ = fluid.layers.prior_box(
        input,
        image,
        min_sizes=min_size,
        max_sizes=max_size,
        aspect_ratios=aspect_ratio,
        variance=variance,
        flip=flip,
        clip=clip,
        steps=steps,
        offset=offset,
        name=name,
        min_max_aspect_ratios_order=True)
    box = fluid.layers.reshape(box, [1, 1, -1])
    variance_ = fluid.layers.reshape(variance_, [1, 1, -1])
    out = fluid.layers.concat([box, variance_], axis=1)
@@ -53,7 +54,8 @@ def priorbox_weights(name, data=None):
    return weights_name


register(
    kind='PriorBox',
    shape=priorbox_shape,
    layer=priorbox_layer,
    weights=priorbox_weights)
@@ -23,8 +23,7 @@ def register(kind, shape, layer, weights):
        kind = [kind]
    else:
        assert type(
            kind) is list, 'invalid param "kind" for register, not a list or str'
    for k in kind:
        assert type(
......
@@ -21,11 +21,12 @@ def roipooling_layer(inputs,
    input = inputs[0]
    roi = inputs[1]
    roi = fluid.layers.slice(roi, axes=[1], starts=[1], ends=[5])
    out = fluid.layers.roi_pool(
        input,
        roi,
        pooled_height=pooled_h,
        pooled_width=pooled_w,
        spatial_scale=spatial_scale)
    return out
@@ -34,7 +35,8 @@ def roipooling_weights(name, data=None):
    return weights_name


register(
    kind='ROIPooling',
    shape=roipooling_shape,
    layer=roipooling_layer,
    weights=roipooling_weights)
@@ -30,11 +30,12 @@ def select_layer(inputs,
    out = []
    for i in range(len(slice_point)):
        out.append(
            fluid.layers.slice(
                input,
                axes=[axis],
                starts=[slice_point[i]],
                ends=[slice_point[i + 1]],
                name=name + '_' + str(i)))
        if i == len(slice_point) - 2:
            break
    return out
@@ -45,7 +46,8 @@ def select_weights(name, data=None):
    return weights_name


register(
    kind='Select',
    shape=select_shape,
    layer=select_layer,
    weights=select_weights)
@@ -17,7 +17,8 @@ def shufflechannel_weights(name, data=None):
    return weights_name


register(
    kind='ShuffleChannel',
    shape=shufflechannel_shape,
    layer=shufflechannel_layer,
    weights=shufflechannel_weights)
@@ -144,8 +144,8 @@ class CaffeOpMapper(OpMapper):
            [s_h, s_w] = [params.stride] * 2
        elif len(params.stride) > 0:
            s_h = params.stride_h if params.stride_h > 0 else params.stride[0]
            s_w = params.stride_w if params.stride_w > 0 else params.stride[len(
                params.stride) - 1]
        elif params.stride_h > 0 or params.stride_w > 0:
            s_h = params.stride_h
            s_w = params.stride_w
@@ -154,8 +154,8 @@ class CaffeOpMapper(OpMapper):
            [p_h, p_w] = [params.pad] * 2
        elif len(params.pad) > 0:
            p_h = params.pad_h if params.pad_h > 0 else params.pad[0]
            p_w = params.pad_w if params.pad_w > 0 else params.pad[len(
                params.pad) - 1]
        elif params.pad_h > 0 or params.pad_w > 0:
            p_h = params.pad_h
            p_w = params.pad_w
@@ -195,10 +195,8 @@ class CaffeOpMapper(OpMapper):
            'shape': shape,
            'name': string(node.layer_name)
        }
        node.fluid_code.add_layer(
            "data", inputs=None, output=node, param_attr=attr)

    def MemoryData(self, node):
        # TODO(syf): Paddlepaddle can't fully support
@@ -209,10 +207,8 @@ class CaffeOpMapper(OpMapper):
            'shape': shape,
            'name': string(node.layer_name)
        }
        node.fluid_code.add_layer(
            "data", inputs=None, output=node.layer_name + '0', param_attr=attr)
        node.fluid_code.add_note('{} = [{}]'.format(node.layer_name,
                                                    node.layer_name + '0'))
@@ -229,11 +225,9 @@ class CaffeOpMapper(OpMapper):
            input_c = node.input_shape[0][1]
            output_c = channel
            data.append(
                np.zeros([output_c, input_c, kernel[0], kernel[1]]).astype(
                    'float32'))
            data.append(np.zeros([output_c, ]).astype('float32'))
        else:
            data = self.adjust_parameters(node)
        self.weights[node.layer_name + '_weights'] = data[0]
@@ -244,29 +238,19 @@ class CaffeOpMapper(OpMapper):
        input = self.graph.get_bottom_node(node, idx=0, copy=True)
        attr = {
            'filter_size': kernel,
            'num_filters': channel,
            'stride': stride,
            'padding': pad,
            'dilation': dilation,
            'groups': group,
            'name': string(node.layer_name),
            'param_attr': string(node.layer_name + '_weights'),
            'bias_attr': False
            if len(data) == 1 else string(node.layer_name + '_bias'),
        }
        node.fluid_code.add_layer(
            "conv2d", inputs=input, output=node, param_attr=attr)
    def Deconvolution(self, node):
        data = node.data
@@ -281,11 +265,9 @@ class CaffeOpMapper(OpMapper):
            input_c = node.input_shape[0][1]
            output_c = channel
            data.append(
                np.zeros([output_c, input_c, kernel[0], kernel[1]]).astype(
                    'float32'))
            data.append(np.zeros([output_c, ]).astype('float32'))
        else:
            data = self.adjust_parameters(node)
        self.weights[node.layer_name + '_weights'] = data[0]
@@ -295,31 +277,20 @@ class CaffeOpMapper(OpMapper):
        ) == 1, 'The count of Deconvolution node\'s input is not 1.'
        input = self.graph.get_bottom_node(node, idx=0, copy=True)
        attr = {
            'output_size': None,
            'filter_size': kernel,
            'num_filters': channel,
            'stride': stride,
            'padding': pad,
            'dilation': dilation,
            'groups': group,
            'name': string(node.layer_name),
            'param_attr': string(node.layer_name + '_weights'),
            'bias_attr': False
            if len(data) == 1 else string(node.layer_name + '_bias')
        }
        node.fluid_code.add_layer(
            "conv2d_transpose", inputs=input, output=node, param_attr=attr)
    def Pooling(self, node):
        params = node.layer.pooling_param
@@ -345,10 +316,8 @@ class CaffeOpMapper(OpMapper):
            'global_pooling': global_pool,
            'name': string(node.layer_name)
        }
        node.fluid_code.add_layer(
            "pool2d", inputs=input, output=node, param_attr=attr)

    def LRN(self, node):
        assert len(node.inputs) == 1, 'The count of LRN node\'s input is not 1.'
@@ -368,10 +337,8 @@ class CaffeOpMapper(OpMapper):
            'beta': params.beta,
            'name': string(node.layer_name)
        }
        node.fluid_code.add_layer(
            "lrn", inputs=input, output=node, param_attr=attr)
    def InnerProduct(self, node):
        data = node.data
@@ -384,8 +351,8 @@ class CaffeOpMapper(OpMapper):
            output_c = params.num_output
            data = []
            data.append(
                np.zeros([input_c, output_c]).astype('float32').astype(
                    'float32'))
            data.append(
                np.zeros([output_c]).astype('float32').astype('float32'))
        else:
@@ -409,21 +376,15 @@ class CaffeOpMapper(OpMapper):
        assert params.bias_term == True
        input = self.graph.get_bottom_node(node, idx=0, copy=True)
        attr = {
            'size': params.num_output,
            'name': string(node.layer_name),
            'act': None,
            'param_attr': string(node.layer_name + '_weights'),
            'bias_attr': False
            if len(data) == 1 else string(node.layer_name + '_bias')
        }
        node.fluid_code.add_layer(
            "fc", inputs=input, output=node, param_attr=attr)
    def Softmax(self, node):
        assert len(
@@ -435,10 +396,8 @@ class CaffeOpMapper(OpMapper):
        dims = len(shape)
        axis = axis + dims if axis < 0 else axis
        attr = {'axis': axis, 'name': string(node.layer_name + '_softmax')}
        node.fluid_code.add_layer(
            "softmax", inputs=input, output=node, param_attr=attr)

    def Slice(self, node):
        assert len(
@@ -459,10 +418,8 @@ class CaffeOpMapper(OpMapper):
            'dim': axis,
            'name': string(node.layer_name)
        }
        node.fluid_code.add_layer(
            "split", inputs=input, output=node.layer_name, param_attr=attr)
    def Concat(self, node):
        assert len(
@@ -475,10 +432,8 @@ class CaffeOpMapper(OpMapper):
        params = node.layer.concat_param
        axis = params.axis
        attr = {'axis': axis, 'name': string(node.layer_name)}
        node.fluid_code.add_layer(
            "concat", inputs=inputs, output=node, param_attr=attr)

    def PReLU(self, node):
        assert len(
@@ -499,10 +454,8 @@ class CaffeOpMapper(OpMapper):
            'param_attr': string(node.layer_name + '_weights'),
            'name': string(node.layer_name)
        }
        node.fluid_code.add_layer(
            "prelu", inputs=input, output=node, param_attr=attr)

    def Accuracy(self, node):
        assert len(
@@ -526,10 +479,8 @@ class CaffeOpMapper(OpMapper):
        assert axis == 1, 'PaddlePaddle can not support the situation when the axis is not 1.'
        assert not ignore_label >= 0, 'PaddlePaddle can not support the situation when the model has ignore label.'
        attr = {'k': top_k}
        node.fluid_code.add_layer(
            "accuracy", inputs=inputs, output=node, param_attr=attr)
    def Eltwise(self, node):
        assert len(
@@ -546,10 +497,11 @@ class CaffeOpMapper(OpMapper):
            inputs_dict['x'] = inputs[0]
            inputs_dict['y'] = inputs[1]
            attr = {'act': None, 'name': string(node.layer_name)}
            node.fluid_code.add_layer(
                "elementwise_mul",
                inputs=inputs_dict,
                output=node,
                param_attr=attr)
        elif mode == 1:
            if hasattr(params, 'coeff') and len(params.coeff) == 2:
                coeff = params.coeff
@@ -559,57 +511,62 @@ class CaffeOpMapper(OpMapper):
                    'value': coeff[0],
                    'dtype': '{}.dtype'.format(input1_name)
                }
                node.fluid_code.add_layer(
                    "fill_constant",
                    inputs=None,
                    output=node.layer_name + '_const1',
                    param_attr=attr)
                attr = {'act': None, 'name': string(node.layer_name + '_mul1')}
                node.fluid_code.add_layer(
                    "elementwise_mul",
                    inputs=input1_name + ', ' + node.layer_name + '_const1',
                    output=node.layer_name + '_mul1',
                    param_attr=attr)
                input2_name = self.get_input_name(inputs[1])
                attr = {
                    'shape': [1],
                    'value': coeff[1],
                    'dtype': '{}.dtype'.format(input2_name)
                }
                node.fluid_code.add_layer(
                    "fill_constant",
                    inputs=None,
                    output=node.layer_name + '_const2',
                    param_attr=attr)
                attr = {'act': None, 'name': string(node.layer_name + '_mul2')}
                node.fluid_code.add_layer(
                    "elementwise_mul",
                    inputs=input2_name + ', ' + node.layer_name + '_const2',
                    output=node.layer_name + '_mul2',
                    param_attr=attr)
                attr = {'act': None, 'name': string(node.layer_name)}
                node.fluid_code.add_layer(
                    "elementwise_add",
                    inputs='{}_mul1, {}_mul2'.format(node.layer_name,
                                                     node.layer_name),
                    output=node,
                    param_attr=attr)
            else:
                inputs_dict = {}
                inputs_dict['x'] = inputs[0]
                inputs_dict['y'] = inputs[1]
                attr = {'act': None, 'name': string(node.layer_name)}
                node.fluid_code.add_layer(
                    "elementwise_add",
                    inputs=inputs_dict,
                    output=node,
                    param_attr=attr)
        else:
            inputs_dict = {}
            inputs_dict['x'] = inputs[0]
            inputs_dict['y'] = inputs[1]
            attr = {'act': None, 'name': string(node.layer_name)}
            node.fluid_code.add_layer(
                "elementwise_max",
                inputs=inputs_dict,
                output=node,
                param_attr=attr)
    def BatchNorm(self, node):
        assert len(
@@ -625,12 +582,8 @@ class CaffeOpMapper(OpMapper):
                'The parameter of {} (type is {}) is not set. So we set the parameters as 0'
                .format(node.layer_name, node.layer_type))
            input_c = node.input_shape[0][1]
            mean = np.zeros([input_c, ]).astype('float32')
            variance = np.zeros([input_c, ]).astype('float32')
            scale = 0
        else:
@@ -651,10 +604,8 @@ class CaffeOpMapper(OpMapper):
            'epsilon': eps,
            'name': string(node.layer_name)
        }
        node.fluid_code.add_layer(
            "batch_norm", inputs=input, output=node, param_attr=attr)
    def Scale(self, node):
        if node.data is None:
@@ -669,10 +620,10 @@ class CaffeOpMapper(OpMapper):
                input_c,
            ]).astype('float32')
        else:
            self.weights[node.layer_name + '_scale'] = np.squeeze(node.data[
                0]).astype('float32')
            self.weights[node.layer_name + '_offset'] = np.squeeze(node.data[
                1]).astype('float32')
        params = node.layer.scale_param
        axis = params.axis
        num_axes = params.num_axes
@@ -687,10 +638,11 @@ class CaffeOpMapper(OpMapper):
            inputs_dict['x'] = input0
            inputs_dict['y'] = input1
            attr = {'axis': axis, 'name': string(node.layer_name + '_mul')}
            node.fluid_code.add_layer(
                "elementwise_mul",
                inputs=inputs_dict,
                output=node.layer_name + '_mul',
                param_attr=attr)
        else:
            bias_shape = node.input_shape[0][axis:axis + num_axes]
            input0 = self.graph.get_bottom_node(node, idx=0, copy=True)
@@ -703,18 +655,17 @@ class CaffeOpMapper(OpMapper):
                'is_bias': True,
                'default_initializer': 'Constant(value=1.0)'
            }
            node.fluid_code.add_layer(
                "create_parameter", inputs=None, output=node, param_attr=attr)
            inputs_dict = {}
            inputs_dict['x'] = input0
            inputs_dict['y'] = node
            attr = {'axis': axis, 'name': string(node.layer_name + '_mul')}
            node.fluid_code.add_layer(
                "elementwise_mul",
                inputs=inputs_dict,
                output=node.layer_name + '_mul',
                param_attr=attr)
        scale_shape = bias_shape
        input0_name = self.get_input_name(input0)
        attr = {
@@ -725,16 +676,18 @@ class CaffeOpMapper(OpMapper):
            'is_bias': True,
            'default_initializer': 'Constant(value=1.0)'
        }
        node.fluid_code.add_layer(
            "create_parameter",
            inputs=None,
            output=node.layer_name + '_offset_param',
            param_attr=attr)
        attr = {'axis': axis, 'name': string(node.layer_name + '_add')}
        node.fluid_code.add_layer(
            "elementwise_add",
            inputs='{}_mul, {}_offset_param'.format(node.layer_name,
                                                    node.layer_name),
            output=node,
            param_attr=attr)
    def Reshape(self, node):
        input = self.graph.get_bottom_node(node, idx=0, copy=True)
@@ -747,10 +700,8 @@ class CaffeOpMapper(OpMapper):
            'act': None,
            'name': string(node.layer_name)
        }
        node.fluid_code.add_layer(
            "reshape", inputs=input, output=node, param_attr=attr)

    def ArgMax(self, node):
        assert len(node.inputs) == 1 and len(
@@ -767,11 +718,12 @@ class CaffeOpMapper(OpMapper):
            axis += len(input_shape)
        if out_max_val is True:
            attr = {'k': top_k, 'name': string(node.layer_name + '_topk')}
            node.fluid_code.add_layer(
                "topk",
                inputs=input,
                output='{}_topk_var, {}_index_var'.format(node.layer_name,
                                                          node.layer_name),
                param_attr=attr)
            attr = {'dtype': '{}_topk_var.dtype'.format(node.layer_name)}
            node.fluid_code.add_layer(
                "cast",
@@ -779,17 +731,19 @@ class CaffeOpMapper(OpMapper):
                output='{}_index_var'.format(node.layer_name),
                param_attr=attr)
            attr = {'axis': axis, 'name': string(node.layer_name)}
            node.fluid_code.add_layer(
                "concat",
                inputs='{}_topk_var, {}_index_var'.format(node.layer_name,
                                                          node.layer_name),
                output=node,
                param_attr=attr)
        else:
            attr = {'k': top_k, 'name': string(node.layer_name)}
            node.fluid_code.add_layer(
                "topk",
                inputs=input,
                output='_, {}'.format(node.layer_name),
                param_attr=attr)
    def Crop(self, node):
        assert len(
@@ -804,29 +758,27 @@ class CaffeOpMapper(OpMapper):
        offset_real = [0] * len(input_shape)
        if hasattr(params, "offset") and len(params.offset) > 0:
            offset = list(params.offset)
            assert (len(input_shape) - axis
                    ) == len(offset), "invalid offset[%s] in crop layer" % (
                        str(offset))
            offset_real = [0] * axis + offset
        attr = {'offsets': list(offset_real), 'name': string(node.layer_name)}
        node.fluid_code.add_layer(
            "crop",
            inputs={'x': input,
                    'shape': node.input_shape[1]},
            output=node,
            param_attr=attr)

    def Flatten(self, node):
        assert len(
            node.
            inputs) == 1, 'The count of DetectionOutput node\'s input is not 1.'
        input = self.graph.get_bottom_node(node, idx=0, copy=True)
        shape = node.output_shape[0]
        attr = {'shape': shape, 'name': string(node.layer_name)}
        node.fluid_code.add_layer(
            "reshape", inputs=input, output=node, param_attr=attr)
    def Power(self, node):
        assert len(
@@ -842,15 +794,11 @@ class CaffeOpMapper(OpMapper):
            'bias_after_scale': True,
            'name': string(node.layer_name + '_scale')
        }
        node.fluid_code.add_layer(
            "scale", inputs=input, output=node, param_attr=attr)
        attr = {'factor': power, 'name': string(node.layer_name)}
        node.fluid_code.add_layer(
            "pow", inputs=node, output=node, param_attr=attr)
    def Reduction(self, node):
        assert len(
@@ -872,55 +820,41 @@ class CaffeOpMapper(OpMapper):
                'keep_dim': False,
                'name': string(node.layer_name)
            }
            node.fluid_code.add_layer(
                "reduce_sum", inputs=input, output=node, param_attr=attr)
        elif operation == 2:  ## operation = ASUM
            attr = {'name': string(node.layer_name + '_abs')}
            node.fluid_code.add_layer(
                "abs", inputs=input, output=node, param_attr=attr)
            attr = {
                'dim': dim[axis:],
                'keep_dim': False,
                'name': string(node.layer_name)
            }
            node.fluid_code.add_layer(
                "reduce_sum", inputs=node, output=node, param_attr=attr)
        elif operation == 3:  ## operation = SUMSQ
            attr = {'factor': 2.0, 'name': string(node.layer_name + '_pow')}
            node.fluid_code.add_layer(
                "pow", inputs=input, output=node, param_attr=attr)
            attr = {
                'dim': dim[axis:],
                'keep_dim': False,
                'name': string(node.layer_name)
            }
            node.fluid_code.add_layer(
                "reduce_sum", inputs=node, output=node, param_attr=attr)
        else:  ## operation = MEAN
            attr = {
                'dim': dim[axis:],
                'keep_dim': False,
                'name': string(node.layer_name)
            }
            node.fluid_code.add_layer(
                "reduce_mean", inputs=node, output=node, param_attr=attr)
        attr = {'scale': coeff}
        node.fluid_code.add_layer(
            "scale", inputs=node, output=node, param_attr=attr)
    def deal_custom_layer(self, node):
        op = node.layer_type
@@ -947,11 +881,12 @@ class CaffeOpMapper(OpMapper):
            assert input is not None, 'This kind of DetectionOutput is not supported!'
            input = self.graph.get_bottom_node(input, idx=0, copy=True)
            inputs_node.append(input)
        node.fluid_code.add_layer(
            func.__code__.co_name,
            inputs=inputs_node,
            output=node,
            param_attr=kwargs,
            is_custom_layer=True)
        if op not in self.used_custom_layers:
            self.used_custom_layers[op] = custom_code
@@ -960,7 +895,5 @@ class CaffeOpMapper(OpMapper):
        op_info = self.directly_map_ops[node.layer_type]
        input = self.graph.get_bottom_node(node, idx=0, copy=True)
        attr = {'name': string(node.layer_name)}
        node.fluid_code.add_layer(
            op_info, inputs=input, output=node, param_attr=attr)
@@ -33,8 +33,8 @@ def get_kernel_parameters(params):
        [s_h, s_w] = [params.stride] * 2
    elif len(params.stride) > 0:
        s_h = params.stride_h if params.stride_h > 0 else params.stride[0]
        s_w = params.stride_w if params.stride_w > 0 else params.stride[len(
            params.stride) - 1]
    elif params.stride_h > 0 or params.stride_w > 0:
        s_h = params.stride_h
        s_w = params.stride_w
......
@@ -24,21 +24,18 @@ def InstanceNormalization_layer(inputs, name=None):
    epsilon = 1e-5
    input_ = inputs[0]
    mean = fluid.layers.reduce_mean(input_, dim=[2, 3], keep_dim=True)
    var = fluid.layers.reduce_mean(
        fluid.layers.square(input_ - mean), dim=[2, 3], keep_dim=True)
    if name is not None:
        scale_name = name + "_scale"
        offset_name = name + "_offset"
    scale_param = inputs[1]
    offset_param = inputs[2]
    scale = fluid.layers.create_parameter(
        name=scale_param.name, shape=input_.shape[1:2], dtype="float32")
    offset = fluid.layers.create_parameter(
        name=offset_param.name, shape=input_.shape[1:2], dtype="float32")
    tmp = fluid.layers.elementwise_mul(x=(input_ - mean), y=scale, axis=1)
    tmp = tmp / fluid.layers.sqrt(var + epsilon)
@@ -51,8 +48,9 @@ def InstanceNormalization_weights(name, data=None):
    return weights_name


register(
    kind='InstanceNormalization',
    shape=InstanceNormalization_shape,
    layer=InstanceNormalization_layer,
    child_func=None,
    weights=InstanceNormalization_weights)
...@@ -36,8 +36,7 @@ def register(kind, shape, layer, child_func, weights): ...@@ -36,8 +36,7 @@ def register(kind, shape, layer, child_func, weights):
kind = [kind] kind = [kind]
else: else:
assert type( assert type(
kind kind) is list, 'invalid param "kind" for register, not a list or str'
) is list, 'invalid param "kind" for register, not a list or str'
for k in kind: for k in kind:
assert type( assert type(
......
...@@ -28,61 +28,55 @@ default_op_mapping_field_values['FILL_NAME_FIELD'] = True
default_op_mapping = {
    'Shape': ['shape', ['X'], ['Out']],
    'Clip': [
        'clip', ['X'], ['Out'], dict(), dict(
            min=(_np.asarray(
                [255, 255, 127, 255], dtype=_np.uint8).view(_np.float32)[0]),
            max=(_np.asarray(
                [255, 255, 127, 127], dtype=_np.uint8).view(_np.float32)[0]), )
    ],
    'Erf': ['erf', ['X'], ['Out']],
    'Ceil': ['ceil', ['X'], ['Out']],
    'ReduceMean': [
        'reduce_mean', ['X'], ['Out'], dict(
            axes='dim', keepdims='keep_dim'), dict(keep_dim=1)
    ],
    'ReduceSum': [
        'reduce_sum', ['X'], ['Out'], dict(
            axes='dim', keepdims='keep_dim'), dict(keep_dim=1)
    ],
    'ReduceMin': [
        'reduce_min', ['X'], ['Out'], dict(
            axes='dim', keepdims='keep_dim'), dict(keep_dim=1)
    ],
    'ReduceMax': [
        'reduce_max', ['X'], ['Out'], dict(
            axes='dim', keepdims='keep_dim'), dict(keep_dim=1)
    ],
    # activation functions
    'Relu': ['relu', ['X'], ['Out']],
    'LeakyRelu': ['leaky_relu', ['X'], ['Out'], dict(), dict(alpha=.01)],
    'Elu': ['elu', ['X'], ['Out'], dict(), dict(alpha=1.)],
    'ThresholdedRelu': [
        'thresholded_relu', ['X'], ['Out'], dict(alpha='threshold'),
        dict(alpha=1.)
    ],
    'Tanh': ['tanh', ['X'], ['Out']],
    'Sigmoid': ['sigmoid', ['X'], ['Out']],
    'HardSigmoid': [
        'hard_sigmoid', ['X'], ['Out'], dict(
            alpha='slope', beta='offset'), dict(
                slope=.2, offset=.5)
    ],
    'Softsign': ['softsign', ['X'], ['Out']],
    'Softplus': ['softplus', ['X'], ['Out']],
    'Exp': ['exp', ['X'], ['Out']],
    'Softmax': ['softmax', ['X'], ['Out'], dict(), dict(axis=1)],
    'Sqrt': ['sqrt', ['X'], ['Out']],
    'Floor': ['floor', ['X'], ['Out']],
    'Abs': ['abs', ['X'], ['Out']],
}

default_ioa_constraint = {
    'Gather': [(lambda i, o, a: a.get('axis', 0) == 0,
                'only axis = 0 is supported')],
}
...@@ -140,8 +140,8 @@ class ONNXOpMapper(OpMapper):
        model.graph.ClearField('output')
        model.graph.output.MergeFrom(model.graph.value_info)
        onnx.save(model,
                  os.path.join(self.tmp_data_dir, 'onnx_model_infer.onnx'))
        sess = rt.InferenceSession(
            os.path.join(self.tmp_data_dir, 'onnx_model_infer.onnx'))
        res = sess.run(None, input_feed=inputs_dict)
...@@ -217,8 +217,7 @@ class ONNXOpMapper(OpMapper):
         default_attrs,
         input_perm,
         output_perm,
         fill_name_field, ) = info
        if fluid_op in default_ioa_constraint:
            for predicate, message in default_ioa_constraint[fluid_op]:
...@@ -246,10 +245,8 @@ class ONNXOpMapper(OpMapper):
        assert len(val_inps) == 1, 'directly_map error with multi inputs'
        if fluid_op not in ['shape']:
            attr['name'] = string(node.layer_name)
        node.fluid_code.add_layer(
            fluid_op, inputs=val_inps[0], output=val_outs[0], param_attr=attr)

    def deal_custom_layer(self, node):
        op = node.layer_type
...@@ -258,11 +255,12 @@ class ONNXOpMapper(OpMapper):
        params = get_params(node.layer, node.layer_type)
        arg_names, kwargs = set_args(func, params)
        kwargs['name'] = string(node.layer_name)
        node.fluid_code.add_layer(
            func.__code__.co_name,
            inputs=node.inputs,
            output=node,
            param_attr=kwargs,
            is_custom_layer=True)
        if op not in self.used_custom_layers:
            self.used_custom_layers[op] = custom_code
        if op + '_child_func' not in self.used_custom_layers:
...@@ -299,21 +297,18 @@ class ONNXOpMapper(OpMapper):
                'shape': val_y_reshaped,
                'name': string(var_y_reshaped)
            }
            node.fluid_code.add_layer(
                'reshape',
                inputs=val_y,
                output=var_y_reshaped,
                param_attr=attr_reshaped)
            inputs = {'x': val_x, 'y': var_y_reshaped}
            node.fluid_code.add_layer(
                op_type, inputs=inputs, output=node, param_attr=attr)
        else:
            inputs = {'x': val_x, 'y': val_y}
            node.fluid_code.add_layer(
                op_type, inputs=inputs, output=node, param_attr=attr)

    def place_holder(self, node):
        self.input_shapes.append(node.out_shapes[0])
...@@ -331,10 +326,8 @@ class ONNXOpMapper(OpMapper):
            "append_batch_size": 'False'
        }
        node.fluid_code.add_layer(
            "data", inputs=None, output=node, param_attr=attr)

    def create_parameter(self, node, parameter=None):
        if parameter is not None:
...@@ -351,10 +344,8 @@ class ONNXOpMapper(OpMapper):
            'attr': string(node.layer_name),
            'default_initializer': 'Constant(0.0)'
        }
        node.fluid_code.add_layer(
            "create_parameter", inputs=None, output=node, param_attr=attr)

    def _pad_if_asymmetric(self, node, pads, val_name):  # pads: SSEE
        assert len(pads) & 1 == 0
...@@ -418,10 +409,8 @@ class ONNXOpMapper(OpMapper):
        else:
            attr['out_shape'] = out_shape
        node.fluid_code.add_layer(
            fluid_op, inputs=val_x, output=node, param_attr=attr)

    def RoiAlign(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -437,13 +426,12 @@ class ONNXOpMapper(OpMapper):
            'spatial_scale': spatial_scale,
            'sampling_ratio': sampling_ratio,
        }
        node.fluid_code.add_layer(
            'roi_align',
            inputs={'input': val_x,
                    'rois': val_rois},
            output=node,
            param_attr=attr)

    def MaxRoiPool(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -456,13 +444,12 @@ class ONNXOpMapper(OpMapper):
            'pooled_width': pooled_width,
            'spatial_scale': spatial_scale,
        }
        node.fluid_code.add_layer(
            'roi_pool',
            inputs={'input': val_x,
                    'rois': val_rois},
            output=node,
            param_attr=attr)

    def Pad(self, node, op_independent=True):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -499,32 +486,27 @@ class ONNXOpMapper(OpMapper):
        attr['paddings'] = paddings
        if op_independent:
            attr['name'] = string(node.layer_name)
            node.fluid_code.add_layer(
                fluid_op, inputs=val_x, output=node, param_attr=attr)
        else:
            attr['name'] = string(node.layer_name + '_paded')
            node.fluid_code.add_layer(
                fluid_op,
                inputs=val_x,
                output=node.layer_name + '_paded',
                param_attr=attr)
            return node.layer_name + '_paded'

    def Unsqueeze(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
        axes = node.get_attr('axes')
        if len(val_x.out_shapes[0]) == 0:
            node.fluid_code.add_layer(
                'assign', inputs=val_x, output=node, param_attr=None)
        else:
            attr = {'axes': axes, 'name': string(node.layer_name)}
            node.fluid_code.add_layer(
                'unsqueeze', inputs=val_x, output=node, param_attr=attr)
...@@ -532,10 +514,18 @@ class ONNXOpMapper(OpMapper):
        lambd = node.get_attr('lambd')
        assert bias == 0.0, 'not support bias!=0'
        attr = {'threshold': lambd, 'name': node.layer_name}
        node.fluid_code.add_layer(
            'hard_shrink', inputs=val_x, output=node, param_attr=attr)

    def Greater(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
        val_y = self.graph.get_input_node(node, idx=1, copy=True)
        node.fluid_code.add_layer(
            'greater_than',
            inputs={'x': val_x,
                    'y': val_y},
            output=node,
            param_attr=None)

    def Constant(self, node):
        val_output = self.graph.get_node(node.layer.output[0], copy=True)
...@@ -552,11 +542,10 @@ class ONNXOpMapper(OpMapper):
        shape = val_output.out_shapes[0]
        if shape is None:
            shape = list(value.shape)
            _logger.warning('in (Constant -> %s): '
                            'attribute "shape" of %s not inferred, '
                            'using value as 1-D tensor may lead to fails',
                            val_output.layer_name, val_output.layer_name)
        if len(value) == 1:
            value = value.tolist()
...@@ -565,10 +554,8 @@ class ONNXOpMapper(OpMapper):
            if dtype.name == 'int64':
                dtype = 'int32'
            attr = {'shape': shape, 'dtype': string(dtype), 'value': value}
            node.fluid_code.add_layer(
                'fill_constant', inputs=None, output=node, param_attr=attr)
        else:
            value = np.reshape(value, shape)
            self.weights[node.layer_name] = value
...@@ -579,10 +566,8 @@ class ONNXOpMapper(OpMapper):
            'attr': string(node.layer_name),
            'default_initializer': 'Constant(0.0)'
        }
        node.fluid_code.add_layer(
            "create_parameter", inputs=None, output=node, param_attr=attr)

    def Resize(self, node):
        self._interpolate(node)
...@@ -603,16 +588,15 @@ class ONNXOpMapper(OpMapper):
        name_ones = node.layer_name + '_ones'
        attr_ones = {'shape': out_shape, 'dtype': string(val_x_dtype)}
        node.fluid_code.add_layer(
            'ones', inputs=None, output=name_ones, param_attr=attr_ones)
        inputs = {'x': name_ones, 'y': val_x}
        attr = {'name': string(node.layer_name)}
        node.fluid_code.add_layer(
            'elementwise_mul',
            inputs=inputs,
            output=node.layer_name,
            param_attr=attr)

    def Gather(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -622,72 +606,67 @@ class ONNXOpMapper(OpMapper):
        assert len(
            indices_shape) <= 2, "Gather op don't support dim of indice >2 "
        if axis == 0 and len(indices_shape) <= 1:
            node.fluid_code.add_layer(
                'gather',
                inputs={'input': val_x,
                        'index': indices},
                output=node,
                param_attr=None)
        elif axis > 0 and len(indices_shape) <= 1:
            perm = list(range(len(val_x.out_shapes[0])))
            perm = [axis] + perm[:axis] + perm[axis + 1:]
            attr_trans = {'perm': perm}
            name_trans = val_x.layer_name + '_trans'
            node.fluid_code.add_layer(
                'transpose',
                inputs=val_x,
                output=name_trans,
                param_attr=attr_trans)
            node.fluid_code.add_layer(
                'gather',
                inputs={'input': name_trans,
                        'index': indices},
                output=node,
                param_attr=None)
            node.fluid_code.add_layer(
                'transpose', inputs=node, output=node, param_attr=attr_trans)
        elif len(indices_shape) > 1:
            from functools import reduce
            reshape_shape = reduce(lambda x, y: x * y, indices_shape)
            node.fluid_code.add_layer(
                'reshape',
                inputs=indices,
                output=indices,
                param_attr={'shape': [reshape_shape, ]})
            perm = list(range(len(val_x.out_shapes[0])))
            perm = [axis] + perm[:axis] + perm[axis + 1:]
            attr_trans = {'perm': perm}
            name_trans = val_x.layer_name + '_trans'
            node.fluid_code.add_layer(
                'transpose',
                inputs=val_x,
                output=name_trans,
                param_attr=attr_trans)
            node.fluid_code.add_layer(
                'gather',
                inputs={'input': name_trans,
                        'index': indices},
                output=node,
                param_attr=None)
            node.fluid_code.add_layer(
                'transpose', inputs=node, output=node, param_attr=attr_trans)
            val_x_shape = val_x.out_shapes[0]
            reshaped_shape = []
            for i in perm:
                reshaped_shape.append(indices_shape[i])
            for i in val_x_shape[:axis] + val_x_shape[axis + 1:]:
                reshaped_shape.append(i)
            node.fluid_code.add_layer(
                'reshape',
                inputs=node,
                output=node,
                param_attr={'shape': reshaped_shape})

    def Slice(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -725,10 +704,8 @@ class ONNXOpMapper(OpMapper):
                if value > shape[axes[idx]]:
                    ends[idx] = shape[axes[idx]]
        attr = {"axes": axes, "starts": starts, "ends": ends}
        node.fluid_code.add_layer(
            'slice', inputs=val_x, output=node, param_attr=attr)

    def ConstantOfShape(self, node):
        val_shape = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -751,10 +728,8 @@ class ONNXOpMapper(OpMapper):
        if dtype.name == 'int64':
            dtype = 'int32'
        attr = {'shape': shape, 'dtype': string(dtype), 'value': value}
        node.fluid_code.add_layer(
            'fill_constant', inputs=None, output=node, param_attr=attr)

    def Split(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -769,10 +744,8 @@ class ONNXOpMapper(OpMapper):
            'name': string(node.layer_name)
        }
        node.fluid_code.add_layer(
            'split', inputs=val_x, output=val_y, param_attr=attr)

    def Reshape(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -789,10 +762,11 @@ class ONNXOpMapper(OpMapper):
            shape, _, _ = self.get_dynamic_shape(val_shape.layer_name)
            if val_shape.dtype == 'int64':
                val_shape_cast = val_shape.layer_name + '_cast'
                node.fluid_code.add_layer(
                    'cast',
                    inputs=val_shape,
                    output=val_shape_cast,
                    param_attr={'dtype': string('int32')})
                attr['actual_shape'] = val_shape_cast
            else:
...@@ -810,10 +784,8 @@ class ONNXOpMapper(OpMapper):
                val_x.layer_name, val_reshaped.layer_name)
        attr['shape'] = shape
        node.fluid_code.add_layer(
            'reshape', inputs=val_x, output=node, param_attr=attr)

    def Cast(self, node):
        val_input = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -827,10 +799,8 @@ class ONNXOpMapper(OpMapper):
        if output_dtype:
            assert dtype == output_dtype, 'dtype of to unmatches output'
        attr = {'dtype': string(dtype)}
        node.fluid_code.add_layer(
            'cast', inputs=val_input, output=node, param_attr=attr)

    def AveragePool(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -865,10 +835,8 @@ class ONNXOpMapper(OpMapper):
            "name": string(node.layer_name)
        }
        node.fluid_code.add_layer(
            fluid_op, inputs=val_x, output=node, param_attr=attr)

    def Concat(self, node):
        inputs = []
...@@ -880,19 +848,15 @@ class ONNXOpMapper(OpMapper):
                inputs.append(ipt.layer_name)
        axis = node.get_attr('axis')
        attr = {'axis': axis}
        node.fluid_code.add_layer(
            'concat', inputs=inputs, output=node, param_attr=attr)

    def Flatten(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
        axis = node.get_attr('axis', 1)
        attr = {"axis": str(axis), "name": string(node.layer_name)}
        node.fluid_code.add_layer(
            'flatten', inputs=val_x, output=node, param_attr=attr)

    def Gemm(self, node):
        val_a = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -911,39 +875,45 @@ class ONNXOpMapper(OpMapper):
            "alpha": alpha,
            "name": string(val_mm)
        }
        node.fluid_code.add_layer(
            'matmul',
            inputs=matmul_inputs,
            output=val_mm,
            param_attr=attr_matmul)

        if beta != 0:
            if beta == 1.:
                add_inputs = {"x": val_mm, "y": val_c}
                attr = {"name": string(node.layer_name)}
                node.fluid_code.add_layer(
                    "elementwise_add",
                    inputs=add_inputs,
                    output=node,
                    param_attr=attr)
            else:
                var_beta = node.layer_name + '_beta'
                matmul_beta_inputs = {"x": val_c, "y": var_beta}
                node.fluid_code.add_layer(
                    "Constant",
                    inputs=matmul_beta_inputs,
                    output=var_beta,
                    param_attr={'value': beta})
                add_inputs = {"x": val_mm, "y": var_beta}
                attr = {"name": string(node.layer_name)}
                node.fluid_code.add_layer(
                    "elementwise_add",
                    inputs=add_inputs,
                    output=node,
                    param_attr=attr)

    def Sum(self, node):
        val_inps = node.layer.input
        inputs = {
            "x": self.graph.get_input_node(
                node, idx=0, copy=True),
            "y": self.graph.get_input_node(
                node, idx=1, copy=True),
        }
        node.fluid_code.add_layer("elementwise_add", inputs=inputs, output=node)
...@@ -953,19 +923,16 @@ class ONNXOpMapper(OpMapper):
                "x": node.layer_name,
                "y": y,
            }
            node.fluid_code.add_layer(
                "elementwise_add", inputs=inputs, output=node)

    def MatMul(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
        val_y = self.graph.get_input_node(node, idx=1, copy=True)
        inputs = {"x": val_x, "y": val_y}
        attr = {"name": string(node.layer_name)}
        node.fluid_code.add_layer(
            "matmul", inputs=inputs, output=node, param_attr=attr)

    def BatchNormalization(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -996,27 +963,21 @@ class ONNXOpMapper(OpMapper):
            "use_global_stats": spatial,
            "name": string(node.layer_name)
        }
        node.fluid_code.add_layer(
            "batch_norm", inputs=val_x, output=node, param_attr=attr)

    def Transpose(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
        perm = node.get_attr('perm')
        attr = {'perm': perm, "name": string(node.layer_name)}
        node.fluid_code.add_layer(
            "transpose", inputs=val_x, output=node, param_attr=attr)

    def Relu(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
        attr = {"name": string(node.layer_name)}
        node.fluid_code.add_layer(
            "relu", inputs=val_x, output=node, param_attr=attr)

    def PRelu(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -1032,30 +993,25 @@ class ONNXOpMapper(OpMapper):
            "param_attr": string(val_slope.layer_name),
            'mode': string(mode)
        }
        node.fluid_code.add_layer(
            "prelu", inputs=val_x, output=node, param_attr=attr)

    def Squeeze(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
        axes = node.get_attr('axes')
        attr = {'axes': axes, "name": string(node.layer_name)}
        node.fluid_code.add_layer(
            "squeeze", inputs=val_x, output=node, param_attr=attr)

    def Equal(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
        val_y = self.graph.get_input_node(node, idx=1, copy=True)
        node.fluid_code.add_layer(
            "equal",
            inputs={'x': val_x,
                    'y': val_y},
            output=node,
            param_attr=None)

    def Where(self, node):
        condition = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -1063,52 +1019,51 @@ class ONNXOpMapper(OpMapper):
        val_y = self.graph.get_input_node(node, idx=2, copy=True)
        not_condition = condition.layer_name + '_not'
        node.fluid_code.add_layer(
            "logical_not",
            inputs=condition,
            output=not_condition,
            param_attr=None)
        cast_not_condition = not_condition + '_cast'
        node.fluid_code.add_layer(
            "cast",
            inputs=not_condition,
            output=cast_not_condition,
            param_attr={'dtype': string(val_x.dtype)})
        cast_condition = condition.layer_name + '_cast'
        node.fluid_code.add_layer(
            "cast",
            inputs=condition,
            output=cast_condition,
            param_attr={'dtype': string(val_x.dtype)})
        mul_val_x = val_x.layer_name + '_mul'
        node.fluid_code.add_layer(
            "elementwise_mul",
            inputs={'x': val_x,
                    'y': cast_condition},
            output=mul_val_x,
            param_attr=None)
        mul_val_y = val_y.layer_name + '_mul'
        node.fluid_code.add_layer(
            "elementwise_mul",
            inputs={'x': val_y,
                    'y': cast_not_condition},
            output=mul_val_y,
            param_attr=None)

        node.fluid_code.add_layer(
            "elementwise_add",
            inputs={'x': mul_val_x,
                    'y': mul_val_y},
            output=node,
            param_attr=None)

    def NonZero(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
        where_name = node.layer_name + '_where'
        node.fluid_code.add_layer(
            "where", inputs=val_x.layer_name + '!=0', output=where_name)
        dims = len(val_x.out_shapes[0])
        elements_count_val_x = reduce(lambda x, y: x * y, val_x.out_shapes[0])
        flatten_names = []
...@@ -1121,18 +1076,16 @@ class ONNXOpMapper(OpMapper):
                'starts': [0, dim],
                'ends': [elements_count_val_x, dim + 1]
            }
            node.fluid_code.add_layer(
                "slice", inputs=where_name, output=slice_name, param_attr=attr)
            node.fluid_code.add_layer(
                "flatten",
                inputs=slice_name,
                output=flatten_name,
                param_attr={'axis': 0})
        node.fluid_code.add_layer(
            "concat", inputs=flatten_names, output=node,
            param_attr={'axis': 0})

    def Identity(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -1151,10 +1104,8 @@ class ONNXOpMapper(OpMapper):
            'expand_times': repeats,
            "name": string(node.layer_name),
        }
        node.fluid_code.add_layer(
            "expand", inputs=val_x, output=node, param_attr=attr)

    def MaxPool(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -1191,10 +1142,8 @@ class ONNXOpMapper(OpMapper):
            "name": string(node.layer_name),
            "exclusive": False
        }
        node.fluid_code.add_layer(
            fluid_op, inputs=val_x, output=node, param_attr=attr)

    def _global_pool(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -1220,10 +1169,8 @@ class ONNXOpMapper(OpMapper):
            "global_pooling": True,
            "name": string(node.layer_name)
        }
        node.fluid_code.add_layer(
            fluid_op, inputs=val_x, output=node, param_attr=attr)

    def GlobalMaxPool(self, node):
        self._global_pool(node)
...@@ -1279,10 +1226,8 @@ class ONNXOpMapper(OpMapper):
            attr["bias_attr"] = string(val_b.layer_name)
        else:
            attr["bias_attr"] = False
        node.fluid_code.add_layer(
            fluid_op, inputs=val_x, output=node, param_attr=attr)

    def ConvTranspose(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -1314,11 +1259,11 @@ class ONNXOpMapper(OpMapper):
        output_size = [0, 0]
        output_size[0] = (val_x.out_shapes[0][2] - 1
                          ) * strides[0] - 2 * paddings[0] + dilations[0] * (
                              kernel_shape[0] - 1) + 1 + out_padding[0]
        output_size[1] = (val_x.out_shapes[0][3] - 1
                          ) * strides[1] - 2 * paddings[1] + dilations[1] * (
                              kernel_shape[1] - 1) + 1 + out_padding[1]
        attr = {
            'num_filters': num_out_channels,
...@@ -1332,10 +1277,8 @@ class ONNXOpMapper(OpMapper):
            'bias_attr': None if val_b is None else string(val_b.layer_name),
            'name': string(node.layer_name),
        }
        node.fluid_code.add_layer(
            fluid_op, inputs=val_x, output=node, param_attr=attr)

    def GRU(self, node):
        val_x = self.graph.get_input_node(node, idx=0, copy=True)
...@@ -1352,15 +1295,13 @@ class ONNXOpMapper(OpMapper):
        else:
            miss_arg_num += 1
        if num_ipt > 4 and node.layer.input[4] != '':
            val_len = self.graph.get_input_node(
                node, idx=4 - miss_arg_num, copy=True)
        else:
            miss_arg_num += 1
        if num_ipt > 5 and node.layer.input[5] != '':
            val_xh = self.graph.get_input_node(
                node, idx=5 - miss_arg_num, copy=True)

        data, dtype, shape = self.get_dynamic_shape(val_x.layer_name)
...@@ -1401,97 +1342,87 @@ class ONNXOpMapper(OpMapper):
        is_reverse = direction == 'reverse'

        var_x0 = node.layer_name + '_x0'
        node.fluid_code.add_layer(
            'squeeze',
            inputs=val_x,
            output=var_x0,
            param_attr={'axes': [1],
                        'name': string(var_x0)})

        var_w0 = node.layer_name + '_w0'
        node.fluid_code.add_layer(
            'squeeze',
            inputs=val_w,
            output=var_w0,
            param_attr={'axes': [0],
                        'name': string(var_w0)})

        var_fc = node.layer_name + '_fc'
        var_mm = (node.layer_name + '_mm') if val_b else var_fc
        node.fluid_code.add_layer(
            'matmul',
            inputs={'x': var_x0,
                    'y': var_w0},
            output=var_mm,
            param_attr={
                'transpose_x': 0,
                'transpose_y': 1,
                'name': string(var_mm)
            })

        var_r0 = node.layer_name + '_r0'
        node.fluid_code.add_layer(
            'squeeze',
            inputs=val_r,
            output=var_r0,
            param_attr={'axes': [0],
                        'name': string(var_r0)})

        var_r0t = node.layer_name + '_r0t'
        node.fluid_code.add_layer(
            'transpose',
            inputs=var_r0,
            output=var_r0t,
            param_attr={'perm': [1, 0],
                        'name': string(var_r0t)})
        if val_b:
            var_bi = node.layer_name + '_bi'
            var_bh = node.layer_name + '_bh'
            node.fluid_code.add_layer(
                'split',
                inputs=val_b,
                output=var_bi + ',' + var_bh,
                param_attr={
                    'axis': 1,
                    'split': [hidden_size * 3, hidden_size * 3],
                    'name': string(node.layer_name + '.b/split')
                })

            var_bi0 = node.layer_name + '_bi0'
            node.fluid_code.add_layer(
                'squeeze',
                inputs=var_bi,
                output=var_bi0,
                param_attr={'axes': [0],
                            'name': string(var_bi0)})

            node.fluid_code.add_layer(
                'elementwise_add',
                inputs=[var_mm, var_bi0],
                output=var_fc,
                param_attr={
                    'axis': 1,
                    'name': string(node.layer_name + '.i/bias')
                })

        if val_xh:
            var_xh0 = node.layer_name + '_xh0'
            node.fluid_code.add_layer(
                'squeeze',
                inputs=val_xh,
                output=var_xh0,
                param_attr={'axes': [1],
                            'name': string(var_xh0)})
        var_y00 = node.layer_name + '_y00'
        attr = {
...@@ -1503,26 +1434,29 @@ class ONNXOpMapper(OpMapper):
            'param_attr': string(var_r0t),
            'bias_attr': string(var_bh) if val_b else False,
        }
        node.fluid_code.add_layer(
            'dynamic_gru',
            inputs=var_fc + ',' + str(hidden_size),
            output=var_y00,
            param_attr=attr)

        num_opt = len(node.layer.output)
        if num_opt > 0 and node.layer.output[0] != '':
            node.fluid_code.add_layer(
                'unsqueeze',
                inputs=var_y00,
                output=node.layer.output[0],
                param_attr={
                    'axes': [1, 1],
                    'name': string(node.layer.output[0])
                })
        if num_opt > 1 and node.layer.output[1] != '':
            node.fluid_code.add_layer(
                'unsqueeze',
                inputs=var_y00,
                output=node.layer.output[1],
                param_attr={
                    'axes': [1, 1],
                    'name': string(node.layer.output[1])
                })
import onnx
import numpy as np
from onnx import onnx_pb, helper
im2seq_counter = 0
def im2sequence(op, block):
    global im2seq_counter
n, c, h, w = block.var(op.input('X')[0]).shape
assert h > 0 and w > 0, "Only supported fixed input shape for im2sequence operator."
stride_h, stride_w = op.attr('strides')
paddings = op.attr('paddings')
assert op.attr(
'out_stride'
) != 1, "Only out_stride==1 is supported for im2sequence operator."
    h = h + paddings[0] + paddings[2]
    w = w + paddings[1] + paddings[3]
kernel_h, kernel_w = op.attr('kernels')
out_h = 1 + (h - kernel_h + stride_h - 1) // stride_h
out_w = 1 + (w - kernel_w + stride_w - 1) // stride_w
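    # Worked example with hypothetical numbers (not taken from any model):
    # for h = 8, kernel_h = 4, stride_h = 4 the ceiling-division above gives
    # out_h = 1 + (8 - 4 + 4 - 1) // 4 = 1 + 7 // 4 = 2 window positions;
    # out_w follows the same rule along the width.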
h_steps = list()
for i in range(out_h):
h_steps.append([i * stride_h, i * stride_h + kernel_h])
w_steps = list()
for i in range(out_w):
w_steps.append([i * stride_w, i * stride_w + kernel_w])
nodes = list()
slice_blocks = list()
for i in range(out_h):
for j in range(out_w):
starts_name = "im2sequence.starts.{}.{}.{}".format(im2seq_counter,
i, j)
starts_tensor = helper.make_tensor(
name=starts_name,
data_type=onnx_pb.TensorProto.INT64,
dims=[4],
vals=[0, 0, h_steps[i][0], w_steps[j][0]])
ends_name = "im2sequence.ends.{}.{}.{}".format(im2seq_counter, i, j)
ends_tensor = helper.make_tensor(
name=ends_name,
data_type=onnx_pb.TensorProto.INT64,
dims=[4],
vals=[999999, 999999, h_steps[i][1], w_steps[j][1]])
starts_node = helper.make_node(
'Constant',
inputs=[],
outputs=[starts_name],
value=starts_tensor)
ends_node = helper.make_node(
'Constant', inputs=[], outputs=[ends_name], value=ends_tensor)
nodes.extend([starts_node, ends_node])
slice_block_name = "im2sequence.slice.{}.{}.{}".format(
im2seq_counter, i, j)
slice_block_node = helper.make_node(
'Slice',
inputs=[op.input('X')[0], starts_name, ends_name],
outputs=[slice_block_name])
flatten_block_name = "im2sequence.flatten.{}.{}.{}".format(
im2seq_counter, i, j)
flatten_block_node = helper.make_node(
"Flatten",
inputs=[slice_block_name],
outputs=[flatten_block_name],
axis=0)
nodes.extend([slice_block_node, flatten_block_node])
slice_blocks.append(flatten_block_name)
concat_block_name = "im2sequence.concat_block.{}".format(im2seq_counter)
# concat_block_node = helper.make_node("Concat", inputs=slice_blocks, outputs=[concat_block_name], axis=0)
concat_block_node = helper.make_node(
"Concat", inputs=slice_blocks, outputs=op.output('Out'), axis=0)
nodes.append(concat_block_node)
print("\n\n==========Importance Notice===========")
print(
"Since im2sequence operator is used in your paddlepaddle model, the translated onnx model only support input data with batch_size=1."
)
print("======================================\n")
return nodes
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import sys
import os
import numpy as np
import paddle.fluid.core as core
import paddle.fluid as fluid
import onnx
import warnings
from onnx import helper, onnx_pb
def multiclass_nms(op, block):
"""
    Convert the Paddle multiclass_nms op to ONNX ops.
    This op selects a subset of boxes from the original boxes.
"""
inputs = dict()
outputs = dict()
attrs = dict()
for name in op.input_names:
inputs[name] = op.input(name)
for name in op.output_names:
outputs[name] = op.output(name)
for name in op.attr_names:
attrs[name] = op.attr(name)
result_name = outputs['Out'][0]
background = attrs['background_label']
normalized = attrs['normalized']
    if not normalized:
        warnings.warn(
            'The parameter normalized of the multiclass_nms OP of Paddle is '
            'False, which differs from ONNX. Please set normalized=True in '
            'multiclass_nms of Paddle.')
    # convert the Paddle attributes to ONNX tensors
    name_score_threshold = [outputs['Out'][0] + "@score_threshold"]
    name_iou_threshold = [outputs['Out'][0] + "@iou_threshold"]
    name_keep_top_k = [outputs['Out'][0] + '@keep_top_k']
    name_keep_top_k_2D = [outputs['Out'][0] + '@keep_top_k_2D']
node_score_threshold = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_score_threshold,
value=onnx.helper.make_tensor(
name=name_score_threshold[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=(),
vals=[float(attrs['score_threshold'])]))
node_iou_threshold = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_iou_threshold,
value=onnx.helper.make_tensor(
name=name_iou_threshold[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=(),
vals=[float(attrs['nms_threshold'])]))
node_keep_top_k = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_keep_top_k,
value=onnx.helper.make_tensor(
name=name_keep_top_k[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=(),
vals=[np.int64(attrs['keep_top_k'])]))
node_keep_top_k_2D = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_keep_top_k_2D,
value=onnx.helper.make_tensor(
name=name_keep_top_k_2D[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=[1, 1],
vals=[np.int64(attrs['keep_top_k'])]))
# the paddle data format is x1,y1,x2,y2
kwargs = {'center_point_box': 0}
name_select_nms = [outputs['Out'][0] + "@select_index"]
    node_select_nms = onnx.helper.make_node(
'NonMaxSuppression',
inputs=inputs['BBoxes'] + inputs['Scores'] + name_keep_top_k +\
name_iou_threshold + name_score_threshold,
outputs=name_select_nms)
    # step 1: the nodes above perform the NMS selection
node_list = [
node_score_threshold, node_iou_threshold, node_keep_top_k,
node_keep_top_k_2D, node_select_nms
]
    # create some constant values to reuse below
    name_const_value = [
        result_name + "@const_0", result_name + "@const_1",
        result_name + "@const_2", result_name + "@const_-1"
    ]
value_const_value = [0, 1, 2, -1]
for name, value in zip(name_const_value, value_const_value):
node = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=[name],
value=onnx.helper.make_tensor(
name=name + "@const",
data_type=onnx.TensorProto.INT64,
dims=[1],
vals=[value]))
node_list.append(node)
    # In this code block, we decode the raw score data, reshaping N * C * M
    # to 1 * (N*C*M), and at the same time decode the select indices to
    # 1 * D and gather the select_indices
outputs_gather_1 = [result_name + "@gather_1"]
node_gather_1 = onnx.helper.make_node(
'Gather',
inputs=name_select_nms + [result_name + "@const_1"],
outputs=outputs_gather_1,
axis=1)
node_list.append(node_gather_1)
    outputs_squeeze_gather_1 = [result_name + "@squeeze_gather_1"]
node_squeeze_gather_1 = onnx.helper.make_node(
'Squeeze',
inputs=outputs_gather_1,
outputs=outputs_squeeze_gather_1,
axes=[1])
node_list.append(node_squeeze_gather_1)
outputs_gather_2 = [result_name + "@gather_2"]
node_gather_2 = onnx.helper.make_node(
'Gather',
inputs=name_select_nms + [result_name + "@const_2"],
outputs=outputs_gather_2,
axis=1)
node_list.append(node_gather_2)
    # keep only the entries whose class is not 0 (the background label)
if background == 0:
outputs_nonzero = [result_name + "@nonzero"]
node_nonzero = onnx.helper.make_node(
'NonZero', inputs=outputs_squeeze_gather_1, outputs=outputs_nonzero)
node_list.append(node_nonzero)
else:
name_thresh = [result_name + "@thresh"]
node_thresh = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_thresh,
value=onnx.helper.make_tensor(
name=name_thresh[0] + "@const",
data_type=onnx.TensorProto.INT32,
dims=[1],
vals=[-1]))
node_list.append(node_thresh)
outputs_cast = [result_name + "@cast"]
node_cast = onnx.helper.make_node(
'Cast', inputs=outputs_squeeze_gather_1, outputs=outputs_cast, to=6)
node_list.append(node_cast)
outputs_greater = [result_name + "@greater"]
node_greater = onnx.helper.make_node(
'Greater',
inputs=outputs_cast + name_thresh,
outputs=outputs_greater)
node_list.append(node_greater)
outputs_nonzero = [result_name + "@nonzero"]
node_nonzero = onnx.helper.make_node(
'NonZero', inputs=outputs_greater, outputs=outputs_nonzero)
node_list.append(node_nonzero)
outputs_gather_1_nonzero = [result_name + "@gather_1_nonzero"]
node_gather_1_nonzero = onnx.helper.make_node(
'Gather',
inputs=outputs_gather_1 + outputs_nonzero,
outputs=outputs_gather_1_nonzero,
axis=0)
node_list.append(node_gather_1_nonzero)
outputs_gather_2_nonzero = [result_name + "@gather_2_nonzero"]
node_gather_2_nonzero = onnx.helper.make_node(
'Gather',
inputs=outputs_gather_2 + outputs_nonzero,
outputs=outputs_gather_2_nonzero,
axis=0)
node_list.append(node_gather_2_nonzero)
    # reshape scores from N * C * M to a 1-D tensor of length N*C*M
outputs_reshape_scores_rank1 = [result_name + "@reshape_scores_rank1"]
node_reshape_scores_rank1 = onnx.helper.make_node(
"Reshape",
inputs=inputs['Scores'] + [result_name + "@const_-1"],
outputs=outputs_reshape_scores_rank1)
node_list.append(node_reshape_scores_rank1)
# get the shape of scores
outputs_shape_scores = [result_name + "@shape_scores"]
node_shape_scores = onnx.helper.make_node(
'Shape', inputs=inputs['Scores'], outputs=outputs_shape_scores)
node_list.append(node_shape_scores)
    # gather index 2 of the scores shape, i.e. the number of boxes M
outputs_gather_scores_dim1 = [result_name + "@gather_scores_dim1"]
node_gather_scores_dim1 = onnx.helper.make_node(
'Gather',
inputs=outputs_shape_scores + [result_name + "@const_2"],
outputs=outputs_gather_scores_dim1,
axis=0)
node_list.append(node_gather_scores_dim1)
    # multiply: class * M
outputs_mul_classnum_boxnum = [result_name + "@mul_classnum_boxnum"]
node_mul_classnum_boxnum = onnx.helper.make_node(
'Mul',
inputs=outputs_gather_1_nonzero + outputs_gather_scores_dim1,
outputs=outputs_mul_classnum_boxnum)
node_list.append(node_mul_classnum_boxnum)
    # add: class * M + box index
outputs_add_class_M_index = [result_name + "@add_class_M_index"]
node_add_class_M_index = onnx.helper.make_node(
'Add',
inputs=outputs_mul_classnum_boxnum + outputs_gather_2_nonzero,
outputs=outputs_add_class_M_index)
node_list.append(node_add_class_M_index)
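    # Index-arithmetic sketch with hypothetical numbers: with M = 100 boxes
    # per class, class 2 and box 7 map to flattened score index
    # 2 * 100 + 7 = 207 in the reshaped (N*C*M) score tensor.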
# Squeeze the indices to 1 dim
outputs_squeeze_select_index = [result_name + "@squeeze_select_index"]
node_squeeze_select_index = onnx.helper.make_node(
'Squeeze',
inputs=outputs_add_class_M_index,
outputs=outputs_squeeze_select_index,
axes=[0, 2])
node_list.append(node_squeeze_select_index)
    # gather the data from the flattened scores
outputs_gather_select_scores = [result_name + "@gather_select_scores"]
node_gather_select_scores = onnx.helper.make_node('Gather',
inputs=outputs_reshape_scores_rank1 + \
outputs_squeeze_select_index,
outputs=outputs_gather_select_scores,
axis=0)
node_list.append(node_gather_select_scores)
    # get the number of selected boxes, to feed TopK
outputs_shape_select_num = [result_name + "@shape_select_num"]
node_shape_select_num = onnx.helper.make_node(
'Shape',
inputs=outputs_gather_select_scores,
outputs=outputs_shape_select_num)
node_list.append(node_shape_select_num)
outputs_gather_select_num = [result_name + "@gather_select_num"]
node_gather_select_num = onnx.helper.make_node(
'Gather',
inputs=outputs_shape_select_num + [result_name + "@const_0"],
outputs=outputs_gather_select_num,
axis=0)
node_list.append(node_gather_select_num)
outputs_unsqueeze_select_num = [result_name + "@unsqueeze_select_num"]
node_unsqueeze_select_num = onnx.helper.make_node(
'Unsqueeze',
inputs=outputs_gather_select_num,
outputs=outputs_unsqueeze_select_num,
axes=[0])
node_list.append(node_unsqueeze_select_num)
    outputs_concat_topK_select_num = [result_name + "@concat_topK_select_num"]
    node_concat_topK_select_num = onnx.helper.make_node(
        'Concat',
        inputs=outputs_unsqueeze_select_num + name_keep_top_k_2D,
        outputs=outputs_concat_topK_select_num,
        axis=0)
    node_list.append(node_concat_topK_select_num)
    outputs_cast_concat_topK_select_num = [
        result_name + "@cast_concat_topK_select_num"
    ]
node_outputs_cast_concat_topK_select_num = onnx.helper.make_node(
'Cast',
inputs=outputs_concat_topK_select_num,
outputs=outputs_cast_concat_topK_select_num,
to=6)
node_list.append(node_outputs_cast_concat_topK_select_num)
# get min(topK, num_select)
outputs_compare_topk_num_select = [result_name + "@compare_topk_num_select"]
node_compare_topk_num_select = onnx.helper.make_node(
'ReduceMin',
inputs=outputs_cast_concat_topK_select_num,
outputs=outputs_compare_topk_num_select,
keepdims=0)
node_list.append(node_compare_topk_num_select)
# unsqueeze the indices to 1D tensor
outputs_unsqueeze_topk_select_indices = [
result_name + "@unsqueeze_topk_select_indices"
]
node_unsqueeze_topk_select_indices = onnx.helper.make_node(
'Unsqueeze',
inputs=outputs_compare_topk_num_select,
outputs=outputs_unsqueeze_topk_select_indices,
axes=[0])
node_list.append(node_unsqueeze_topk_select_indices)
# cast the indices to INT64
outputs_cast_topk_indices = [result_name + "@cast_topk_indices"]
node_cast_topk_indices = onnx.helper.make_node(
'Cast',
inputs=outputs_unsqueeze_topk_select_indices,
outputs=outputs_cast_topk_indices,
to=7)
node_list.append(node_cast_topk_indices)
# select topk scores indices
outputs_topk_select_topk_indices = [result_name + "@topk_select_topk_values",\
result_name + "@topk_select_topk_indices"]
node_topk_select_topk_indices = onnx.helper.make_node(
'TopK',
inputs=outputs_gather_select_scores + outputs_cast_topk_indices,
outputs=outputs_topk_select_topk_indices)
node_list.append(node_topk_select_topk_indices)
# gather topk label, scores, boxes
outputs_gather_topk_scores = [result_name + "@gather_topk_scores"]
node_gather_topk_scores = onnx.helper.make_node(
'Gather',
inputs=outputs_gather_select_scores +
[outputs_topk_select_topk_indices[1]],
outputs=outputs_gather_topk_scores,
axis=0)
node_list.append(node_gather_topk_scores)
outputs_gather_topk_class = [result_name + "@gather_topk_class"]
node_gather_topk_class = onnx.helper.make_node(
'Gather',
inputs=outputs_gather_1_nonzero +
[outputs_topk_select_topk_indices[1]],
outputs=outputs_gather_topk_class,
axis=1)
node_list.append(node_gather_topk_class)
    # to gather the boxes, first gather the box ids, then the boxes themselves
outputs_gather_topk_boxes_id = [result_name + "@gather_topk_boxes_id"]
node_gather_topk_boxes_id = onnx.helper.make_node(
'Gather',
inputs=outputs_gather_2_nonzero +
[outputs_topk_select_topk_indices[1]],
outputs=outputs_gather_topk_boxes_id,
axis=1)
node_list.append(node_gather_topk_boxes_id)
# squeeze the gather_topk_boxes_id to 1 dim
outputs_squeeze_topk_boxes_id = [result_name + "@squeeze_topk_boxes_id"]
node_squeeze_topk_boxes_id = onnx.helper.make_node(
'Squeeze',
inputs=outputs_gather_topk_boxes_id,
outputs=outputs_squeeze_topk_boxes_id,
axes=[0, 2])
node_list.append(node_squeeze_topk_boxes_id)
outputs_gather_select_boxes = [result_name + "@gather_select_boxes"]
node_gather_select_boxes = onnx.helper.make_node(
'Gather',
inputs=inputs['BBoxes'] + outputs_squeeze_topk_boxes_id,
outputs=outputs_gather_select_boxes,
axis=1)
node_list.append(node_gather_select_boxes)
# concat the final result
    # cast the class ids to float before the concat
outputs_cast_topk_class = [result_name + "@cast_topk_class"]
node_cast_topk_class = onnx.helper.make_node(
'Cast',
inputs=outputs_gather_topk_class,
outputs=outputs_cast_topk_class,
to=1)
node_list.append(node_cast_topk_class)
outputs_unsqueeze_topk_scores = [result_name + "@unsqueeze_topk_scores"]
node_unsqueeze_topk_scores = onnx.helper.make_node(
'Unsqueeze',
inputs=outputs_gather_topk_scores,
outputs=outputs_unsqueeze_topk_scores,
axes=[0, 2])
node_list.append(node_unsqueeze_topk_scores)
inputs_concat_final_results = outputs_cast_topk_class + outputs_unsqueeze_topk_scores +\
outputs_gather_select_boxes
outputs_concat_final_results = outputs['Out']
node_concat_final_results = onnx.helper.make_node(
'Concat',
inputs=inputs_concat_final_results,
outputs=outputs_concat_final_results,
axis=2)
node_list.append(node_concat_final_results)
return node_list
import onnx
import numpy as np
from onnx import onnx_pb, helper
def get_old_name(arg, name_prefix=''):
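    # Strip the "@..." suffix that conversion appends to variable names so the
    # original name can be looked up in the Paddle block, e.g.
    # get_old_name("conv1_weights@clip") -> "conv1_weights".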
prefix_index = arg.find(name_prefix)
if prefix_index != -1:
last_prefix = arg[len(name_prefix):]
else:
last_prefix = arg
idx = last_prefix.find('@')
if idx != -1:
last_prefix = last_prefix[:idx]
return name_prefix + last_prefix
def yolo_box(op, block):
inputs = dict()
outputs = dict()
attrs = dict()
for name in op.input_names:
inputs[name] = op.input(name)
for name in op.output_names:
outputs[name] = op.output(name)
for name in op.attr_names:
attrs[name] = op.attr(name)
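    # Expand Paddle's yolo_box op into primitive ONNX nodes: reshape the
    # feature map to [N, A, 5 + C, H, W], decode centers/sizes against the
    # grid and anchors, threshold confidences, and emit Boxes/Scores.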
model_name = outputs['Boxes'][0]
input_shape = block.vars[get_old_name(inputs['X'][0])].shape
image_size = inputs['ImgSize']
input_height = input_shape[2]
input_width = input_shape[3]
class_num = attrs['class_num']
anchors = attrs['anchors']
num_anchors = int(len(anchors)) // 2
downsample_ratio = attrs['downsample_ratio']
input_size = input_height * downsample_ratio
conf_thresh = attrs['conf_thresh']
conf_thresh_mat = np.ones([num_anchors * input_height *
input_width]) * conf_thresh
node_list = []
im_outputs = []
x_shape = [1, num_anchors, 5 + class_num, input_height, input_width]
name_x_shape = [model_name + "@x_shape"]
node_x_shape = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_x_shape,
value=onnx.helper.make_tensor(
name=name_x_shape[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=[5],
vals=x_shape))
node_list.append(node_x_shape)
outputs_x_reshape = [model_name + "@reshape"]
node_x_reshape = onnx.helper.make_node(
'Reshape', inputs=inputs['X'] + name_x_shape, outputs=outputs_x_reshape)
node_list.append(node_x_reshape)
outputs_x_transpose = [model_name + "@x_transpose"]
node_x_transpose = onnx.helper.make_node(
'Transpose',
inputs=outputs_x_reshape,
outputs=outputs_x_transpose,
perm=[0, 1, 3, 4, 2])
node_list.append(node_x_transpose)
    range_x = list(range(input_width))
    range_y = list(range(input_height))
name_range_x = [model_name + "@range_x"]
node_range_x = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_range_x,
value=onnx.helper.make_tensor(
name=name_range_x[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=[input_width],
vals=range_x))
node_list.append(node_range_x)
name_range_y = [model_name + "@range_y"]
node_range_y = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_range_y,
value=onnx.helper.make_tensor(
name=name_range_y[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=[input_height],
vals=range_y))
node_list.append(node_range_y)
range_x_new_shape = [1, input_width]
range_y_new_shape = [input_height, 1]
name_range_x_new_shape = [model_name + "@range_x_new_shape"]
node_range_x_new_shape = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_range_x_new_shape,
value=onnx.helper.make_tensor(
name=name_range_x_new_shape[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=[len(range_x_new_shape)],
vals=range_x_new_shape))
node_list.append(node_range_x_new_shape)
name_range_y_new_shape = [model_name + "@range_y_new_shape"]
node_range_y_new_shape = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_range_y_new_shape,
value=onnx.helper.make_tensor(
name=name_range_y_new_shape[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=[len(range_y_new_shape)],
vals=range_y_new_shape))
node_list.append(node_range_y_new_shape)
outputs_range_x_reshape = [model_name + "@range_x_reshape"]
node_range_x_reshape = onnx.helper.make_node(
'Reshape',
inputs=name_range_x + name_range_x_new_shape,
outputs=outputs_range_x_reshape)
node_list.append(node_range_x_reshape)
outputs_range_y_reshape = [model_name + "@range_y_reshape"]
node_range_y_reshape = onnx.helper.make_node(
'Reshape',
inputs=name_range_y + name_range_y_new_shape,
outputs=outputs_range_y_reshape)
node_list.append(node_range_y_reshape)
outputs_grid_x = [model_name + "@grid_x"]
node_grid_x = onnx.helper.make_node(
"Tile",
inputs=outputs_range_x_reshape + name_range_y_new_shape,
outputs=outputs_grid_x)
node_list.append(node_grid_x)
outputs_grid_y = [model_name + "@grid_y"]
node_grid_y = onnx.helper.make_node(
"Tile",
inputs=outputs_range_y_reshape + name_range_x_new_shape,
outputs=outputs_grid_y)
node_list.append(node_grid_y)
outputs_box_x = [model_name + "@box_x"]
outputs_box_y = [model_name + "@box_y"]
outputs_box_w = [model_name + "@box_w"]
outputs_box_h = [model_name + "@box_h"]
outputs_conf = [model_name + "@conf"]
outputs_prob = [model_name + "@prob"]
node_split_input = onnx.helper.make_node(
"Split",
inputs=outputs_x_transpose,
outputs=outputs_box_x + outputs_box_y + outputs_box_w\
+ outputs_box_h + outputs_conf + outputs_prob,
axis=-1,
split=[1, 1, 1, 1, 1, class_num])
node_list.append(node_split_input)
outputs_box_x_sigmoid = [model_name + "@box_x_sigmoid"]
outputs_box_y_sigmoid = [model_name + "@box_y_sigmoid"]
node_box_x_sigmoid = onnx.helper.make_node(
"Sigmoid", inputs=outputs_box_x, outputs=outputs_box_x_sigmoid)
node_list.append(node_box_x_sigmoid)
node_box_y_sigmoid = onnx.helper.make_node(
"Sigmoid", inputs=outputs_box_y, outputs=outputs_box_y_sigmoid)
node_list.append(node_box_y_sigmoid)
outputs_box_x_squeeze = [model_name + "@box_x_squeeze"]
outputs_box_y_squeeze = [model_name + "@box_y_squeeze"]
node_box_x_squeeze = onnx.helper.make_node(
'Squeeze',
inputs=outputs_box_x_sigmoid,
outputs=outputs_box_x_squeeze,
axes=[4])
node_list.append(node_box_x_squeeze)
node_box_y_squeeze = onnx.helper.make_node(
'Squeeze',
inputs=outputs_box_y_sigmoid,
outputs=outputs_box_y_squeeze,
axes=[4])
node_list.append(node_box_y_squeeze)
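    # Decode box centers against the cell grid:
    # bx = (sigmoid(tx) + grid_x) / input_w, by = (sigmoid(ty) + grid_y) / input_h.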
outputs_box_x_add_grid = [model_name + "@box_x_add_grid"]
outputs_box_y_add_grid = [model_name + "@box_y_add_grid"]
node_box_x_add_grid = onnx.helper.make_node(
"Add",
inputs=outputs_grid_x + outputs_box_x_squeeze,
outputs=outputs_box_x_add_grid)
node_list.append(node_box_x_add_grid)
node_box_y_add_grid = onnx.helper.make_node(
"Add",
inputs=outputs_grid_y + outputs_box_y_squeeze,
outputs=outputs_box_y_add_grid)
node_list.append(node_box_y_add_grid)
name_input_h = [model_name + "@input_h"]
name_input_w = [model_name + "@input_w"]
node_input_h = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_input_h,
value=onnx.helper.make_tensor(
            name=name_input_h[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=(),
vals=[input_height]))
node_list.append(node_input_h)
node_input_w = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_input_w,
value=onnx.helper.make_tensor(
name=name_input_w[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=(),
vals=[input_width]))
node_list.append(node_input_w)
outputs_box_x_encode = [model_name + "@box_x_encode"]
outputs_box_y_encode = [model_name + "@box_y_encode"]
node_box_x_encode = onnx.helper.make_node(
'Div',
inputs=outputs_box_x_add_grid + name_input_w,
outputs=outputs_box_x_encode)
node_list.append(node_box_x_encode)
node_box_y_encode = onnx.helper.make_node(
'Div',
inputs=outputs_box_y_add_grid + name_input_h,
outputs=outputs_box_y_encode)
node_list.append(node_box_y_encode)
name_anchor_tensor = [model_name + "@anchor_tensor"]
node_anchor_tensor = onnx.helper.make_node(
"Constant",
inputs=[],
outputs=name_anchor_tensor,
value=onnx.helper.make_tensor(
name=name_anchor_tensor[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=[len(anchors)],
vals=anchors))
node_list.append(node_anchor_tensor)
anchor_shape = [int(num_anchors), 2]
name_anchor_shape = [model_name + "@anchor_shape"]
node_anchor_shape = onnx.helper.make_node(
"Constant",
inputs=[],
outputs=name_anchor_shape,
value=onnx.helper.make_tensor(
name=name_anchor_shape[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=[2],
vals=anchor_shape))
node_list.append(node_anchor_shape)
outputs_anchor_tensor_reshape = [model_name + "@anchor_tensor_reshape"]
node_anchor_tensor_reshape = onnx.helper.make_node(
"Reshape",
inputs=name_anchor_tensor + name_anchor_shape,
outputs=outputs_anchor_tensor_reshape)
node_list.append(node_anchor_tensor_reshape)
name_input_size = [model_name + "@input_size"]
node_input_size = onnx.helper.make_node(
"Constant",
inputs=[],
outputs=name_input_size,
value=onnx.helper.make_tensor(
name=name_input_size[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=(),
vals=[input_size]))
node_list.append(node_input_size)
outputs_anchors_div_input_size = [model_name + "@anchors_div_input_size"]
node_anchors_div_input_size = onnx.helper.make_node(
"Div",
inputs=outputs_anchor_tensor_reshape + name_input_size,
outputs=outputs_anchors_div_input_size)
node_list.append(node_anchors_div_input_size)
outputs_anchor_w = [model_name + "@anchor_w"]
outputs_anchor_h = [model_name + "@anchor_h"]
node_anchor_split = onnx.helper.make_node(
'Split',
inputs=outputs_anchors_div_input_size,
outputs=outputs_anchor_w + outputs_anchor_h,
axis=1,
split=[1, 1])
node_list.append(node_anchor_split)
new_anchor_shape = [1, int(num_anchors), 1, 1]
name_new_anchor_shape = [model_name + "@new_anchor_shape"]
node_new_anchor_shape = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_new_anchor_shape,
value=onnx.helper.make_tensor(
name=name_new_anchor_shape[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=[len(new_anchor_shape)],
vals=new_anchor_shape))
node_list.append(node_new_anchor_shape)
outputs_anchor_w_reshape = [model_name + "@anchor_w_reshape"]
outputs_anchor_h_reshape = [model_name + "@anchor_h_reshape"]
node_anchor_w_reshape = onnx.helper.make_node(
'Reshape',
inputs=outputs_anchor_w + name_new_anchor_shape,
outputs=outputs_anchor_w_reshape)
node_list.append(node_anchor_w_reshape)
node_anchor_h_reshape = onnx.helper.make_node(
'Reshape',
inputs=outputs_anchor_h + name_new_anchor_shape,
outputs=outputs_anchor_h_reshape)
node_list.append(node_anchor_h_reshape)
outputs_box_w_squeeze = [model_name + "@box_w_squeeze"]
node_box_w_squeeze = onnx.helper.make_node(
'Squeeze',
inputs=outputs_box_w,
outputs=outputs_box_w_squeeze,
axes=[4])
node_list.append(node_box_w_squeeze)
outputs_box_h_squeeze = [model_name + "@box_h_squeeze"]
node_box_h_squeeze = onnx.helper.make_node(
'Squeeze',
inputs=outputs_box_h,
outputs=outputs_box_h_squeeze,
axes=[4])
node_list.append(node_box_h_squeeze)
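    # Decode box sizes against the (already input_size-normalized) anchors:
    # bw = exp(tw) * anchor_w / input_size, bh = exp(th) * anchor_h / input_size.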
outputs_box_w_exp = [model_name + "@box_w_exp"]
node_box_w_exp = onnx.helper.make_node(
"Exp", inputs=outputs_box_w_squeeze, outputs=outputs_box_w_exp)
node_list.append(node_box_w_exp)
outputs_box_h_exp = [model_name + "@box_h_exp"]
node_box_h_exp = onnx.helper.make_node(
"Exp", inputs=outputs_box_h_squeeze, outputs=outputs_box_h_exp)
node_list.append(node_box_h_exp)
    outputs_box_w_encode = [model_name + "@box_w_encode"]
    outputs_box_h_encode = [model_name + "@box_h_encode"]
node_box_w_encode = onnx.helper.make_node(
'Mul',
inputs=outputs_box_w_exp + outputs_anchor_w_reshape,
outputs=outputs_box_w_encode)
node_list.append(node_box_w_encode)
node_box_h_encode = onnx.helper.make_node(
'Mul',
inputs=outputs_box_h_exp + outputs_anchor_h_reshape,
outputs=outputs_box_h_encode)
node_list.append(node_box_h_encode)
outputs_conf_sigmoid = [model_name + "@conf_sigmoid"]
node_conf_sigmoid = onnx.helper.make_node(
'Sigmoid', inputs=outputs_conf, outputs=outputs_conf_sigmoid)
node_list.append(node_conf_sigmoid)
name_conf_thresh = [model_name + "@conf_thresh"]
node_conf_thresh = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_conf_thresh,
value=onnx.helper.make_tensor(
name=name_conf_thresh[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=[num_anchors * input_height * input_width],
vals=conf_thresh_mat))
node_list.append(node_conf_thresh)
conf_shape = [1, int(num_anchors), input_height, input_width, 1]
name_conf_shape = [model_name + "@conf_shape"]
node_conf_shape = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_conf_shape,
value=onnx.helper.make_tensor(
name=name_conf_shape[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=[len(conf_shape)],
vals=conf_shape))
node_list.append(node_conf_shape)
outputs_conf_thresh_reshape = [model_name + "@conf_thresh_reshape"]
node_conf_thresh_reshape = onnx.helper.make_node(
'Reshape',
inputs=name_conf_thresh + name_conf_shape,
outputs=outputs_conf_thresh_reshape)
node_list.append(node_conf_thresh_reshape)
outputs_conf_sub = [model_name + "@conf_sub"]
node_conf_sub = onnx.helper.make_node(
'Sub',
inputs=outputs_conf_sigmoid + outputs_conf_thresh_reshape,
outputs=outputs_conf_sub)
node_list.append(node_conf_sub)
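    # Mask low-confidence cells: keep sigmoid(conf) only where
    # sigmoid(conf) - conf_thresh > 0, otherwise force it to zero.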
outputs_conf_clip = [model_name + "@conf_clip"]
node_conf_clip = onnx.helper.make_node(
'Clip', inputs=outputs_conf_sub, outputs=outputs_conf_clip)
node_list.append(node_conf_clip)
zeros = [0]
name_zeros = [model_name + "@zeros"]
node_zeros = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_zeros,
value=onnx.helper.make_tensor(
name=name_zeros[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=(),
vals=zeros))
node_list.append(node_zeros)
outputs_conf_clip_bool = [model_name + "@conf_clip_bool"]
node_conf_clip_bool = onnx.helper.make_node(
'Greater',
inputs=outputs_conf_clip + name_zeros,
outputs=outputs_conf_clip_bool)
node_list.append(node_conf_clip_bool)
outputs_conf_clip_cast = [model_name + "@conf_clip_cast"]
node_conf_clip_cast = onnx.helper.make_node(
'Cast',
inputs=outputs_conf_clip_bool,
outputs=outputs_conf_clip_cast,
to=1)
node_list.append(node_conf_clip_cast)
outputs_conf_set_zero = [model_name + "@conf_set_zero"]
node_conf_set_zero = onnx.helper.make_node(
'Mul',
inputs=outputs_conf_sigmoid + outputs_conf_clip_cast,
outputs=outputs_conf_set_zero)
node_list.append(node_conf_set_zero)
outputs_prob_sigmoid = [model_name + "@prob_sigmoid"]
node_prob_sigmoid = onnx.helper.make_node(
'Sigmoid', inputs=outputs_prob, outputs=outputs_prob_sigmoid)
node_list.append(node_prob_sigmoid)
new_shape = [1, int(num_anchors), input_height, input_width, 1]
name_new_shape = [model_name + "@new_shape"]
node_new_shape = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_new_shape,
value=onnx.helper.make_tensor(
name=name_new_shape[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=[len(new_shape)],
vals=new_shape))
node_list.append(node_new_shape)
outputs_conf_new_shape = [model_name + "@_conf_new_shape"]
node_conf_new_shape = onnx.helper.make_node(
'Reshape',
inputs=outputs_conf_set_zero + name_new_shape,
outputs=outputs_conf_new_shape)
node_list.append(node_conf_new_shape)
outputs_score = [model_name + "@score"]
node_score = onnx.helper.make_node(
'Mul',
inputs=outputs_prob_sigmoid + outputs_conf_new_shape,
outputs=outputs_score)
node_list.append(node_score)
outputs_conf_bool = [model_name + "@conf_bool"]
node_conf_bool = onnx.helper.make_node(
'Greater',
inputs=outputs_conf_new_shape + name_zeros,
outputs=outputs_conf_bool)
node_list.append(node_conf_bool)
outputs_box_x_new_shape = [model_name + "@box_x_new_shape"]
node_box_x_new_shape = onnx.helper.make_node(
'Reshape',
inputs=outputs_box_x_encode + name_new_shape,
outputs=outputs_box_x_new_shape)
node_list.append(node_box_x_new_shape)
outputs_box_y_new_shape = [model_name + "@box_y_new_shape"]
node_box_y_new_shape = onnx.helper.make_node(
'Reshape',
inputs=outputs_box_y_encode + name_new_shape,
outputs=outputs_box_y_new_shape)
node_list.append(node_box_y_new_shape)
outputs_box_w_new_shape = [model_name + "@box_w_new_shape"]
node_box_w_new_shape = onnx.helper.make_node(
'Reshape',
inputs=outputs_box_w_encode + name_new_shape,
outputs=outputs_box_w_new_shape)
node_list.append(node_box_w_new_shape)
outputs_box_h_new_shape = [model_name + "@box_h_new_shape"]
node_box_h_new_shape = onnx.helper.make_node(
'Reshape',
inputs=outputs_box_h_encode + name_new_shape,
outputs=outputs_box_h_new_shape)
node_list.append(node_box_h_new_shape)
outputs_pred_box = [model_name + "@pred_box"]
node_pred_box = onnx.helper.make_node(
'Concat',
inputs=outputs_box_x_new_shape + outputs_box_y_new_shape + \
outputs_box_w_new_shape + outputs_box_h_new_shape,
outputs=outputs_pred_box,
axis=4)
node_list.append(node_pred_box)
    outputs_conf_cast = [model_name + "@conf_cast"]
node_conf_cast = onnx.helper.make_node(
'Cast', inputs=outputs_conf_bool, outputs=outputs_conf_cast, to=1)
node_list.append(node_conf_cast)
outputs_pred_box_mul_conf = [model_name + "@pred_box_mul_conf"]
node_pred_box_mul_conf = onnx.helper.make_node(
'Mul',
inputs=outputs_pred_box + outputs_conf_cast,
outputs=outputs_pred_box_mul_conf)
node_list.append(node_pred_box_mul_conf)
box_shape = [1, int(num_anchors) * input_height * input_width, 4]
name_box_shape = [model_name + "@box_shape"]
node_box_shape = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_box_shape,
value=onnx.helper.make_tensor(
name=name_box_shape[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=[len(box_shape)],
vals=box_shape))
node_list.append(node_box_shape)
outputs_pred_box_new_shape = [model_name + "@pred_box_new_shape"]
node_pred_box_new_shape = onnx.helper.make_node(
'Reshape',
inputs=outputs_pred_box_mul_conf + name_box_shape,
outputs=outputs_pred_box_new_shape)
node_list.append(node_pred_box_new_shape)
outputs_pred_box_x = [model_name + "@_pred_box_x"]
outputs_pred_box_y = [model_name + "@_pred_box_y"]
outputs_pred_box_w = [model_name + "@_pred_box_w"]
outputs_pred_box_h = [model_name + "@_pred_box_h"]
node_pred_box_split = onnx.helper.make_node(
'Split',
inputs=outputs_pred_box_new_shape,
outputs=outputs_pred_box_x + outputs_pred_box_y + outputs_pred_box_w +
outputs_pred_box_h,
axis=2)
node_list.append(node_pred_box_split)
name_number_two = [model_name + "@number_two"]
node_number_two = onnx.helper.make_node(
"Constant",
inputs=[],
outputs=name_number_two,
value=onnx.helper.make_tensor(
name=name_number_two[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=(),
vals=[2]))
node_list.append(node_number_two)
outputs_half_w = [model_name + "@half_w"]
node_half_w = onnx.helper.make_node(
"Div",
inputs=outputs_pred_box_w + name_number_two,
outputs=outputs_half_w)
node_list.append(node_half_w)
outputs_half_h = [model_name + "@half_h"]
node_half_h = onnx.helper.make_node(
"Div",
inputs=outputs_pred_box_h + name_number_two,
outputs=outputs_half_h)
node_list.append(node_half_h)
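    # Convert box centers/sizes to corner form:
    # x1 = cx - w/2, y1 = cy - h/2, x2 = cx + w/2, y2 = cy + h/2.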
outputs_pred_box_x1 = [model_name + "@pred_box_x1"]
node_pred_box_x1 = onnx.helper.make_node(
'Sub',
inputs=outputs_pred_box_x + outputs_half_w,
outputs=outputs_pred_box_x1)
node_list.append(node_pred_box_x1)
outputs_pred_box_y1 = [model_name + "@pred_box_y1"]
node_pred_box_y1 = onnx.helper.make_node(
'Sub',
inputs=outputs_pred_box_y + outputs_half_h,
outputs=outputs_pred_box_y1)
node_list.append(node_pred_box_y1)
outputs_pred_box_x2 = [model_name + "@pred_box_x2"]
node_pred_box_x2 = onnx.helper.make_node(
'Add',
inputs=outputs_pred_box_x + outputs_half_w,
outputs=outputs_pred_box_x2)
node_list.append(node_pred_box_x2)
outputs_pred_box_y2 = [model_name + "@pred_box_y2"]
node_pred_box_y2 = onnx.helper.make_node(
'Add',
inputs=outputs_pred_box_y + outputs_half_h,
outputs=outputs_pred_box_y2)
node_list.append(node_pred_box_y2)
    outputs_squeeze_image_size = [model_name + "@squeeze_image_size"]
    node_squeeze_image_size = onnx.helper.make_node(
        "Squeeze",
        axes=[0],
        inputs=image_size,
        outputs=outputs_squeeze_image_size)
    node_list.append(node_squeeze_image_size)
    output_img_height = [model_name + "@img_height"]
    output_img_width = [model_name + "@img_width"]
    node_image_size_split = onnx.helper.make_node(
        "Split",
        inputs=outputs_squeeze_image_size,
        outputs=output_img_height + output_img_width,
        axis=-1,
        split=[1, 1])
    node_list.append(node_image_size_split)
output_img_width_cast = [model_name + "@img_width_cast"]
node_img_width_cast = onnx.helper.make_node(
'Cast', inputs=output_img_width, outputs=output_img_width_cast, to=1)
node_list.append(node_img_width_cast)
output_img_height_cast = [model_name + "@img_height_cast"]
node_img_height_cast = onnx.helper.make_node(
'Cast', inputs=output_img_height, outputs=output_img_height_cast, to=1)
node_list.append(node_img_height_cast)
outputs_pred_box_x1_decode = [model_name + "@pred_box_x1_decode"]
outputs_pred_box_y1_decode = [model_name + "@pred_box_y1_decode"]
outputs_pred_box_x2_decode = [model_name + "@pred_box_x2_decode"]
outputs_pred_box_y2_decode = [model_name + "@pred_box_y2_decode"]
node_pred_box_x1_decode = onnx.helper.make_node(
'Mul',
inputs=outputs_pred_box_x1 + output_img_width_cast,
outputs=outputs_pred_box_x1_decode)
node_list.append(node_pred_box_x1_decode)
node_pred_box_y1_decode = onnx.helper.make_node(
'Mul',
inputs=outputs_pred_box_y1 + output_img_height_cast,
outputs=outputs_pred_box_y1_decode)
node_list.append(node_pred_box_y1_decode)
node_pred_box_x2_decode = onnx.helper.make_node(
'Mul',
inputs=outputs_pred_box_x2 + output_img_width_cast,
outputs=outputs_pred_box_x2_decode)
node_list.append(node_pred_box_x2_decode)
node_pred_box_y2_decode = onnx.helper.make_node(
'Mul',
inputs=outputs_pred_box_y2 + output_img_height_cast,
outputs=outputs_pred_box_y2_decode)
node_list.append(node_pred_box_y2_decode)
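    # Boxes are now in pixel coordinates; clip them to the image borders
    # ([0, W - 1] horizontally, [0, H - 1] vertically) below.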
name_number_one = [model_name + "@one"]
node_number_one = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=name_number_one,
value=onnx.helper.make_tensor(
name=name_number_one[0] + "@const",
data_type=onnx.TensorProto.FLOAT,
dims=(),
vals=[1]))
node_list.append(node_number_one)
output_new_img_height = [model_name + "@new_img_height"]
node_new_img_height = onnx.helper.make_node(
'Sub',
inputs=output_img_height_cast + name_number_one,
outputs=output_new_img_height)
node_list.append(node_new_img_height)
output_new_img_width = [model_name + "@new_img_width"]
node_new_img_width = onnx.helper.make_node(
'Sub',
inputs=output_img_width_cast + name_number_one,
outputs=output_new_img_width)
node_list.append(node_new_img_width)
outputs_pred_box_x2_sub_w = [model_name + "@pred_box_x2_sub_w"]
node_pred_box_x2_sub_w = onnx.helper.make_node(
'Sub',
inputs=outputs_pred_box_x2_decode + output_new_img_width,
outputs=outputs_pred_box_x2_sub_w)
node_list.append(node_pred_box_x2_sub_w)
outputs_pred_box_y2_sub_h = [model_name + "@pred_box_y2_sub_h"]
node_pred_box_y2_sub_h = onnx.helper.make_node(
'Sub',
inputs=outputs_pred_box_y2_decode + output_new_img_height,
outputs=outputs_pred_box_y2_sub_h)
node_list.append(node_pred_box_y2_sub_h)
outputs_pred_box_x1_clip = [model_name + "@pred_box_x1_clip"]
outputs_pred_box_y1_clip = [model_name + "@pred_box_y1_clip"]
outputs_pred_box_x2_clip = [model_name + "@pred_box_x2_clip"]
outputs_pred_box_y2_clip = [model_name + "@pred_box_y2_clip"]
node_pred_box_x1_clip = onnx.helper.make_node(
'Clip',
inputs=outputs_pred_box_x1_decode,
outputs=outputs_pred_box_x1_clip,
min=0.0,
max=float(np.inf))
node_list.append(node_pred_box_x1_clip)
node_pred_box_y1_clip = onnx.helper.make_node(
'Clip',
inputs=outputs_pred_box_y1_decode,
outputs=outputs_pred_box_y1_clip,
min=0.0,
max=float(np.inf))
node_list.append(node_pred_box_y1_clip)
node_pred_box_x2_clip = onnx.helper.make_node(
'Clip',
inputs=outputs_pred_box_x2_sub_w,
outputs=outputs_pred_box_x2_clip,
min=0.0,
max=float(np.inf))
node_list.append(node_pred_box_x2_clip)
node_pred_box_y2_clip = onnx.helper.make_node(
'Clip',
inputs=outputs_pred_box_y2_sub_h,
outputs=outputs_pred_box_y2_clip,
min=0.0,
max=float(np.inf))
node_list.append(node_pred_box_y2_clip)
outputs_pred_box_x2_res = [model_name + "@box_x2_res"]
node_pred_box_x2_res = onnx.helper.make_node(
'Sub',
inputs=outputs_pred_box_x2_decode + outputs_pred_box_x2_clip,
outputs=outputs_pred_box_x2_res)
node_list.append(node_pred_box_x2_res)
outputs_pred_box_y2_res = [model_name + "@box_y2_res"]
node_pred_box_y2_res = onnx.helper.make_node(
'Sub',
inputs=outputs_pred_box_y2_decode + outputs_pred_box_y2_clip,
outputs=outputs_pred_box_y2_res)
node_list.append(node_pred_box_y2_res)
node_pred_box_result = onnx.helper.make_node(
'Concat',
inputs=outputs_pred_box_x1_clip + outputs_pred_box_y1_clip +
outputs_pred_box_x2_res + outputs_pred_box_y2_res,
outputs=outputs['Boxes'],
axis=-1)
node_list.append(node_pred_box_result)
score_shape = [1, input_height * input_width * int(num_anchors), class_num]
name_score_shape = [model_name + "@score_shape"]
node_score_shape = onnx.helper.make_node(
"Constant",
inputs=[],
outputs=name_score_shape,
value=onnx.helper.make_tensor(
name=name_score_shape[0] + "@const",
data_type=onnx.TensorProto.INT64,
dims=[len(score_shape)],
vals=score_shape))
node_list.append(node_score_shape)
node_score_new_shape = onnx.helper.make_node(
'Reshape',
inputs=outputs_score + name_score_shape,
outputs=outputs['Scores'])
node_list.append(node_score_new_shape)
return node_list
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import sys
import x2paddle
import os
import numpy as np
import paddle.fluid.core as core
import paddle.fluid as fluid
import onnx
from onnx import helper, onnx_pb
class PaddleOpMapper(object):
def __init__(self):
self.paddle_onnx_dtype_map = {
core.VarDesc.VarType.FP32: onnx_pb.TensorProto.FLOAT,
core.VarDesc.VarType.FP64: onnx_pb.TensorProto.DOUBLE,
core.VarDesc.VarType.INT32: onnx_pb.TensorProto.INT32,
            core.VarDesc.VarType.INT16: onnx_pb.TensorProto.INT16,
core.VarDesc.VarType.INT64: onnx_pb.TensorProto.INT64,
core.VarDesc.VarType.BOOL: onnx_pb.TensorProto.BOOL
}
self.name_counter = dict()
def convert(self, program, save_dir):
weight_nodes = self.convert_weights(program)
op_nodes = list()
input_nodes = list()
output_nodes = list()
unsupported_ops = set()
print("Translating PaddlePaddle to ONNX...\n")
for block in program.blocks:
for i, op in enumerate(block.ops):
sys.stdout.write(
"\rTotal:{}, Current:{} : {} ".format(
len(block.ops), i + 1, op.type))
sys.stdout.flush()
if not hasattr(self, op.type):
unsupported_ops.add(op.type)
continue
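                # After the first unsupported op is found, only keep
                # collecting op names; skip converting the remaining ops.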
if len(unsupported_ops) > 0:
continue
node = getattr(self, op.type)(op, block)
if op.type == 'feed':
input_nodes.append(node)
elif op.type == 'fetch':
output_nodes.append(node)
else:
if isinstance(node, list):
op_nodes = op_nodes + node
else:
op_nodes.append(node)
if len(unsupported_ops) > 0:
print("\nThere's {} ops are not supported yet".format(
len(unsupported_ops)))
for op in unsupported_ops:
print("=========== {} ===========".format(op))
return
graph = helper.make_graph(
nodes=weight_nodes + op_nodes,
name='onnx_model_from_paddle',
initializer=[],
inputs=input_nodes,
outputs=output_nodes)
model = helper.make_model(graph, producer_name='X2Paddle')
onnx.checker.check_model(model)
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
with open(os.path.join(save_dir, 'x2paddle_model.onnx'), 'wb') as f:
f.write(model.SerializeToString())
print("\nTranslated model saved in {}".format(
os.path.join(save_dir, 'x2paddle_model.onnx')))
def get_name(self, op_name, var_name):
name = 'p2o.{}.{}'.format(op_name, var_name)
if name not in self.name_counter:
self.name_counter[name] = 0
else:
self.name_counter[name] += 1
return name + '.{}'.format(self.name_counter[name])
def convert_weights(self, program):
var_names = program.global_block().vars
nodes = list()
for name in var_names:
var = program.global_block().var(name)
if name.endswith('feed') or name.endswith('fetch'):
continue
if not var.persistable:
continue
weight = np.array(fluid.global_scope().find_var(name).get_tensor())
tensor = helper.make_tensor(
name=name,
dims=var.shape,
data_type=self.paddle_onnx_dtype_map[var.dtype],
vals=weight.flatten().tolist())
node = helper.make_node(
'Constant', inputs=[], outputs=[name], value=tensor)
nodes.append(node)
return nodes
def make_constant_node(self, name, dtype, value=None):
if isinstance(value, list):
dims = (len(value), )
elif value is None:
dims = ()
value = []
else:
dims = ()
value = [value]
tensor = helper.make_tensor(
name=name, data_type=dtype, dims=dims, vals=value)
node = helper.make_node(
'Constant', inputs=[], outputs=[name], value=tensor)
return node
def conv2d(self, op, block):
kernel_shape = block.var(op.input('Filter')[0]).shape
node = helper.make_node(
'Conv',
inputs=op.input('Input') + op.input('Filter'),
outputs=op.output('Output'),
dilations=op.attr('dilations'),
kernel_shape=kernel_shape[-2:],
strides=op.attr('strides'),
group=op.attr('groups'),
pads=op.attr('paddings') + op.attr('paddings'))
return node
def conv2d_transpose(self, op, block):
kernel_shape = block.var(op.input('Filter')[0]).shape
node = helper.make_node(
'ConvTranspose',
inputs=op.input('Input') + op.input('Filter'),
outputs=op.output('Output'),
dilations=op.attr('dilations'),
kernel_shape=kernel_shape[-2:],
strides=op.attr('strides'),
group=1,
pads=op.attr('paddings') + op.attr('paddings'))
return node
def relu(self, op, block):
node = helper.make_node(
'Relu', inputs=op.input('X'), outputs=op.output('Out'))
return node
def sigmoid(self, op, block):
node = helper.make_node(
'Sigmoid', inputs=op.input('X'), outputs=op.output('Out'))
return node
def exp(self, op, block):
node = helper.make_node(
'Exp', inputs=op.input('X'), outputs=op.output('Out'))
return node
def leaky_relu(self, op, block):
node = helper.make_node(
'LeakyRelu',
inputs=op.input('X'),
outputs=op.output('Out'),
alpha=op.attr('alpha'))
return node
def elementwise_add(self, op, block):
axis = op.attr('axis')
x_shape = block.var(op.input('X')[0]).shape
y_shape = block.var(op.input('Y')[0]).shape
if len(y_shape) == 1 and axis == 1:
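            # Paddle broadcasts Y along `axis`, while ONNX Add broadcasts from
            # the trailing dimension; reshape Y to [1, C, 1, ...] first.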
shape_name = self.get_name(op.type, 'shape')
shape_value = [1] * len(x_shape)
shape_value[axis] = y_shape[0]
shape_node = self.make_constant_node(
shape_name, onnx_pb.TensorProto.INT64, shape_value)
temp_value = self.get_name(op.type, 'temp')
y_node = helper.make_node(
'Reshape',
inputs=[op.input('Y')[0], shape_name],
outputs=[temp_value])
node = helper.make_node(
'Add',
inputs=[op.input('X')[0], temp_value],
outputs=op.output('Out'))
return [shape_node, y_node, node]
elif len(x_shape) == len(y_shape):
node = helper.make_node(
'Add',
inputs=[op.input('X')[0], op.input('Y')[0]],
outputs=op.output('Out'))
return node
else:
            raise Exception("Unexpected situation happened in elementwise_add")
def elementwise_sub(self, op, block):
axis = op.attr('axis')
x_shape = block.var(op.input('X')[0]).shape
y_shape = block.var(op.input('Y')[0]).shape
if len(y_shape) == 1 and axis == 1:
shape_name = self.get_name(op.type, 'shape')
shape_value = [1] * len(x_shape)
shape_value[axis] = y_shape[0]
shape_node = self.make_constant_node(
shape_name, onnx_pb.TensorProto.INT64, shape_value)
temp_value = self.get_name(op.type, 'temp')
y_node = helper.make_node(
'Reshape',
inputs=[op.input('Y')[0], shape_name],
outputs=[temp_value])
node = helper.make_node(
'Sub',
inputs=[op.input('X')[0], temp_value],
outputs=op.output('Out'))
return [shape_node, y_node, node]
elif len(x_shape) == len(y_shape):
node = helper.make_node(
'Sub',
inputs=[op.input('X')[0], op.input('Y')[0]],
outputs=op.output('Out'))
return node
else:
            raise Exception("Unexpected situation happened in elementwise_sub")
def pool2d(self, op, block):
pool_type = {
'max': ('MaxPool', 'GlobalMaxPool'),
'avg': ('AveragePool', 'GlobalAveragePool')
}
if op.attr('global_pooling'):
node = helper.make_node(
pool_type[op.attr('pooling_type')][1],
inputs=op.input('X'),
outputs=op.output('Out'), )
else:
input_shape = block.var(op.input('X')[0]).shape
k_size = op.attr('ksize')
paddings = op.attr('paddings')
if input_shape[2] > 0 and input_shape[2] + paddings[0] < k_size[0]:
k_size[0] = input_shape[2] + paddings[0]
if input_shape[3] > 0 and input_shape[3] + paddings[1] < k_size[1]:
k_size[1] = input_shape[3] + paddings[1]
node = helper.make_node(
pool_type[op.attr('pooling_type')][0],
inputs=op.input('X'),
outputs=op.output('Out'),
kernel_shape=k_size,
strides=op.attr('strides'),
pads=op.attr('paddings') + op.attr('paddings'))
return node
def softmax(self, op, block):
axis = op.attr('axis')
shape = block.var(op.output('Out')[0]).shape
if axis < 0:
axis += len(shape)
if axis == len(shape) - 1:
node = helper.make_node(
'Softmax',
inputs=op.input('X'),
outputs=op.output('Out'),
axis=op.attr('axis'))
return node
else:
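            # ONNX Softmax normalizes over the trailing (flattened) axis, so
            # swap the target axis to the end, apply Softmax, then swap back.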
perm = [i for i in range(len(shape))]
perm[-1] = axis
perm[axis] = len(shape) - 1
transpose_name0 = self.get_name(op.type, 'transpose')
transpose_node0 = helper.make_node(
'Transpose',
inputs=op.input('X'),
outputs=[transpose_name0],
perm=perm)
softmax_name = self.get_name(op.type, 'softmax')
softmax_node = helper.make_node(
'Softmax',
inputs=[transpose_name0],
outputs=[softmax_name],
axis=-1)
transpose_name1 = self.get_name(op.type, 'transpose')
transpose_node1 = helper.make_node(
'Transpose',
inputs=[softmax_name],
outputs=op.output('Out'),
perm=perm)
return [transpose_node0, softmax_node, transpose_node1]
def scale(self, op, block):
scale = op.attr('scale')
bias = op.attr('bias')
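        # scale(x) = scale * x + bias (or scale * (x + bias) when
        # bias_after_scale is False); when both are no-ops, emit Identity.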
if math.fabs(scale - 1.0) < 1e-06 and math.fabs(bias - 0.0) < 1e-06:
node = helper.make_node(
'Identity', inputs=op.input('X'), outputs=op.output('Out'))
return node
else:
scale_name = self.get_name(op.type, 'scale')
bias_name = self.get_name(op.type, 'bias')
scale_node = self.make_constant_node(
scale_name, onnx_pb.TensorProto.FLOAT, scale)
bias_node = self.make_constant_node(bias_name,
onnx_pb.TensorProto.FLOAT, bias)
temp_tensor_name = self.get_name(op.type, 'temporary')
if op.attr('bias_after_scale'):
node1 = helper.make_node(
'Mul',
inputs=[scale_name, op.input('X')[0]],
outputs=[temp_tensor_name])
node2 = helper.make_node(
'Add',
inputs=[bias_name, temp_tensor_name],
outputs=op.output('Out'))
else:
                node1 = helper.make_node(
                    'Add',
                    inputs=[bias_name, op.input('X')[0]],
                    outputs=[temp_tensor_name])
                node2 = helper.make_node(
                    'Mul',
                    inputs=[scale_name, temp_tensor_name],
                    outputs=op.output('Out'))
return [scale_node, bias_node, node1, node2]
def mul(self, op, block):
x_shape = block.var(op.input('X')[0]).shape
y_shape = block.var(op.input('Y')[0]).shape
out_shape = list(block.var(op.output('Out')[0]).shape)
x_num_col_dims = op.attr('x_num_col_dims')
y_num_col_dims = op.attr('y_num_col_dims')
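        # Paddle's mul flattens X and Y to 2-D around x/y_num_col_dims before
        # multiplying; mirror that here as Flatten -> MatMul -> Reshape.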
flatten_x_name = 'flatten_{}'.format(op.input('X')[0])
flatten_y_name = 'flatten_{}'.format(op.input('Y')[0])
shape_name = 'temp_shape_{}'.format(op.output('Out')[0])
temp_out_name = 'temp_{}'.format(op.output('Out')[0])
flatten_x = helper.make_node(
'Flatten',
inputs=op.input('X'),
outputs=[flatten_x_name],
axis=x_num_col_dims)
flatten_y = helper.make_node(
'Flatten',
inputs=op.input('Y'),
outputs=[flatten_y_name],
axis=y_num_col_dims)
shape_node = self.make_constant_node(
shape_name, onnx_pb.TensorProto.INT64, out_shape)
node = helper.make_node(
'MatMul',
inputs=[flatten_x_name, flatten_y_name],
outputs=[temp_out_name])
reshape_out = helper.make_node(
'Reshape',
inputs=[temp_out_name, shape_name],
outputs=op.output('Out'))
return [flatten_x, flatten_y, shape_node, node, reshape_out]
def batch_norm(self, op, block):
kwargs = {
'epsilon': op.attr('epsilon'),
'momentum': op.attr('momentum')
}
inputs = op.input('X') + op.input('Scale') + op.input(
'Bias') + op.input('Mean') + op.input('Variance')
node = helper.make_node(
'BatchNormalization',
inputs=inputs,
outputs=op.output('Y'),
**kwargs)
return node
def concat(self, op, block):
node = helper.make_node(
'Concat',
inputs=op.input('X'),
outputs=op.output('Out'),
axis=op.attr('axis'))
return node
def depthwise_conv2d(self, op, block):
return self.conv2d(op, block)
def relu6(self, op, block):
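        # relu6(x) = min(max(x, 0), threshold); ONNX Clip (opset 11 style)
        # takes min/max as extra inputs, so build them as Constant nodes.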
min_name = self.get_name(op.type, 'min')
max_name = self.get_name(op.type, 'max')
min_node = self.make_constant_node(min_name, onnx_pb.TensorProto.FLOAT,
0)
max_node = self.make_constant_node(max_name, onnx_pb.TensorProto.FLOAT,
op.attr('threshold'))
node = helper.make_node(
'Clip',
inputs=[op.input('X')[0], min_name, max_name],
outputs=op.output('Out'), )
return [min_node, max_node, node]
def shape(self, op, block):
node = helper.make_node(
'Shape', inputs=op.input('Input'), outputs=op.output('Out'))
return node
def split(self, op, block):
sections = op.attr('sections')
if len(sections) > 0:
node = helper.make_node(
'Split',
inputs=op.input('X'),
outputs=op.output('Out'),
axis=op.attr('axis'),
split=sections)
else:
node = helper.make_node(
'Split',
inputs=op.input('X'),
outputs=op.output('Out'),
axis=op.attr('axis'))
return node
def slice(self, op, block):
axes = op.attr('axes')
starts = op.attr('starts')
ends = op.attr('ends')
axes_name = self.get_name(op.type, 'axes')
starts_name = self.get_name(op.type, 'starts')
ends_name = self.get_name(op.type, 'ends')
axes_node = self.make_constant_node(axes_name,
onnx_pb.TensorProto.INT64, axes)
starts_node = self.make_constant_node(starts_name,
onnx_pb.TensorProto.INT64, starts)
ends_node = self.make_constant_node(ends_name,
onnx_pb.TensorProto.INT64, ends)
node = helper.make_node(
"Slice",
inputs=[op.input('Input')[0], starts_name, ends_name, axes_name],
outputs=op.output('Out'), )
return [starts_node, ends_node, axes_node, node]
def fill_constant(self, op, block):
value = op.attr('value')
dtype = op.attr('dtype')
shape = op.attr('shape')
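        # The dtype attr follows Paddle's framework.proto enum (2 == INT32).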
value = np.ones(shape) * value
if dtype == 2:
value = value.astype('int32')
node = helper.make_node(
'Constant',
inputs=[],
outputs=op.output('Out'),
value=helper.make_tensor(
name=op.output('Out')[0],
data_type=self.paddle_onnx_dtype_map[dtype],
dims=shape,
vals=value.tolist()))
return node
def transpose2(self, op, block):
node = helper.make_node(
'Transpose',
inputs=op.input('X'),
outputs=op.output('Out'),
perm=op.attr('axis'))
return node
def reshape2(self, op, block):
input_names = op.input_names
if len(op.input('ShapeTensor')) > 1:
cast_shape_nodes = list()
cast_shape_names = list()
for i in range(len(op.input('ShapeTensor'))):
dim = op.input('ShapeTensor')[i]
temp_name = self.get_name(op.type, 'shape.cast')
node = helper.make_node(
'Cast',
inputs=[dim],
outputs=[temp_name],
to=onnx_pb.TensorProto.INT64)
cast_shape_nodes.append(node)
cast_shape_names.append(temp_name)
temp_name = self.get_name(op.type, 'shape.concat')
shape_node = helper.make_node(
'Concat', inputs=cast_shape_names, outputs=[temp_name], axis=-1)
node = helper.make_node(
'Reshape',
inputs=[op.input('X')[0], temp_name],
outputs=op.output('Out'))
return cast_shape_nodes + [shape_node, node]
else:
temp_name = self.get_name(op.type, 'shape.cast')
cast_shape_node = helper.make_node(
'Cast',
inputs=op.input('ShapeTensor'),
outputs=[temp_name],
to=onnx_pb.TensorProto.INT64)
node = helper.make_node(
'Reshape',
inputs=[op.input('X')[0], temp_name],
outputs=op.output('Out'))
return [cast_shape_node, node]
def dropout(self, op, block):
dropout_mode = op.attr('dropout_implementation')
dropout_prob = op.attr('dropout_prob')
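        # 'upscale_in_train' already rescales activations during training, so
        # inference is an identity; 'downgrade_in_infer' scales by (1 - p).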
if dropout_mode == 'upscale_in_train':
node = helper.make_node(
'Identity', inputs=op.input('X'), outputs=op.output('Out'))
return node
elif dropout_mode == 'downgrade_in_infer':
scale_name = self.get_name(op.type, 'scale')
scale_node = self.make_constant_node(
scale_name, onnx_pb.TensorProto.FLOAT, 1 - dropout_prob)
node = helper.make_node(
"Mul",
inputs=[op.input('X')[0], scale_name],
outputs=op.output('Out'))
return [scale_node, node]
else:
raise Exception("Unexpected situation happend")
def reduce_mean(self, op, block):
node = helper.make_node(
'ReduceMean',
inputs=op.input('X'),
outputs=op.output('Out'),
axes=op.attr('dim'),
keepdims=op.attr('keep_dim'))
return node
def bilinear_interp(self, op, block):
input_names = op.input_names
coordinate_transformation_mode = 'half_pixel'
if op.attr('align_corners'):
coordinate_transformation_mode = 'align_corners'
if ('OutSize' in input_names and len(op.input('OutSize')) > 0) or (
'SizeTensor' in input_names and
len(op.input('SizeTensor')) > 0):
node_list = list()
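            # ONNX Resize (opset 11) takes (X, roi, scales, sizes); pass an
            # empty scales tensor and build sizes from X.shape[:2] + OutSize.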
            roi_name = self.get_name(op.type, 'roi')
            roi_node = self.make_constant_node(
                roi_name, onnx_pb.TensorProto.FLOAT, [1, 1, 1, 1, 1, 1, 1, 1])
empty_name = self.get_name(op.type, 'empty')
empty_tensor = helper.make_tensor(
empty_name,
onnx_pb.TensorProto.FLOAT, (0, ),
np.array([]).astype('float32'),
raw=False)
empty_node = helper.make_node(
'Constant', [], outputs=[empty_name], value=empty_tensor)
shape_name0 = self.get_name(op.type, 'shape')
shape_node0 = helper.make_node(
'Shape', inputs=op.input('X'), outputs=[shape_name0])
starts_name = self.get_name(op.type, 'slice.starts')
starts_node = self.make_constant_node(
starts_name, onnx_pb.TensorProto.INT64, [0])
ends_name = self.get_name(op.type, 'slice.ends')
ends_node = self.make_constant_node(ends_name,
onnx_pb.TensorProto.INT64, [2])
shape_name1 = self.get_name(op.type, 'shape')
shape_node1 = helper.make_node(
'Slice',
inputs=[shape_name0, starts_name, ends_name],
outputs=[shape_name1])
node_list.extend([
roi_node, empty_node, shape_node0, starts_node, ends_node,
shape_node1
])
if 'OutSize' in input_names and len(op.input('OutSize')) > 0:
cast_shape_name = self.get_name(op.type, "shape.cast")
cast_shape_node = helper.make_node(
'Cast',
inputs=op.input('OutSize'),
outputs=[cast_shape_name],
to=onnx_pb.TensorProto.INT64)
node_list.append(cast_shape_node)
else:
concat_shape_name = self.get_name(op.type, "shape.concat")
concat_shape_node = helper.make_node(
"Concat",
inputs=op.input('SizeTensor'),
outputs=[concat_shape_name],
axis=0)
cast_shape_name = self.get_name(op.type, "shape.cast")
cast_shape_node = helper.make_node(
'Cast',
inputs=[concat_shape_name],
outputs=[cast_shape_name],
to=onnx_pb.TensorProto.INT64)
node_list.extend([concat_shape_node, cast_shape_node])
shape_name3 = self.get_name(op.type, "shape.concat")
shape_node3 = helper.make_node(
'Concat',
inputs=[shape_name1, cast_shape_name],
outputs=[shape_name3],
axis=0)
result_node = helper.make_node(
'Resize',
inputs=[op.input('X')[0], roi_name, empty_name, shape_name3],
outputs=op.output('Out'),
mode='linear',
coordinate_transformation_mode=coordinate_transformation_mode)
node_list.extend([shape_node3, result_node])
return node_list
elif 'Scale' in input_names and len(op.input('Scale')) > 0:
node = helper.make_node(
'Resize',
inputs=[op.input('X')[0], op.input('Scale')[0]],
outputs=op.output('Out'),
mode='linear',
coordinate_transformation_mode=coordinate_transformation_mode)
else:
out_shape = [op.attr('out_h'), op.attr('out_w')]
scale = op.attr('scale')
if out_shape.count(-1) > 0:
scale_name = self.get_name(op.type, 'scale')
scale_node = self.make_constant_node(scale_name,
onnx_pb.TensorProto.FLOAT,
[1, 1, scale, scale])
roi_name = self.get_name(op.type, 'roi')
roi_node = self.make_constant_node(roi_name,
onnx_pb.TensorProto.FLOAT,
[1, 1, 1, 1, 1, 1, 1, 1])
                node = helper.make_node(
                    'Resize',
                    inputs=[op.input('X')[0], roi_name, scale_name],
                    outputs=op.output('Out'),
                    mode='linear',
                    coordinate_transformation_mode=coordinate_transformation_mode
                )
return [scale_node, roi_node, node]
else:
                raise Exception("Unexpected situation happened")
return node
def nearest_interp(self, op, block):
input_names = op.input_names
coordinate_transformation_mode = 'half_pixel'
if op.attr('align_corners'):
coordinate_transformation_mode = 'align_corners'
if 'OutSize' in input_names and len(op.input('OutSize')) > 0:
node = helper.make_node(
'Resize',
inputs=[op.input('X')[0], '', op.input('OutSize')[0]],
outputs=op.output('Out'),
mode='nearest',
coordinate_transformation_mode=coordinate_transformation_mode)
elif 'Scale' in input_names and len(op.input('Scale')) > 0:
node = helper.make_node(
'Resize',
inputs=[op.input('X')[0], op.input('Scale')[0]],
outputs=op.output('Out'),
mode='nearest',
coordinate_transformation_mode=coordinate_transformation_mode)
else:
out_shape = [op.attr('out_h'), op.attr('out_w')]
scale = op.attr('scale')
if out_shape.count(-1) > 0:
scale_name = self.get_name(op.type, 'scale')
scale_node = self.make_constant_node(scale_name,
onnx_pb.TensorProto.FLOAT,
[1, 1, scale, scale])
roi_name = self.get_name(op.type, 'roi')
roi_node = self.make_constant_node(roi_name,
onnx_pb.TensorProto.FLOAT,
[1, 1, 1, 1, 1, 1, 1, 1])
node = helper.make_node(
'Resize',
inputs=[op.input('X')[0], roi_name, scale_name],
outputs=op.output('Out'),
mode='nearest',
coordinate_transformation_mode=coordinate_transformation_mode
)
return [scale_node, roi_node, node]
else:
                raise Exception("Unexpected situation happened")
return node
def hard_sigmoid(self, op, block):
slope = op.attr('slope')
offset = op.attr('offset')
node = helper.make_node(
'HardSigmoid',
inputs=op.input('X'),
outputs=op.output('Out'),
alpha=slope,
beta=offset)
return node
def hard_swish(self, op, block):
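        # hard_swish(x) = x * clip(x + offset, 0, threshold) / scale, built
        # from Add -> Clip -> Mul -> Div with constant inputs.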
min_name = self.get_name(op.type, 'min')
max_name = self.get_name(op.type, 'max')
scale_name = self.get_name(op.type, 'scale')
offset_name = self.get_name(op.type, 'offset')
min_node = self.make_constant_node(min_name, onnx_pb.TensorProto.FLOAT,
0)
max_node = self.make_constant_node(max_name, onnx_pb.TensorProto.FLOAT,
op.attr('threshold'))
scale_node = self.make_constant_node(scale_name,
onnx_pb.TensorProto.FLOAT,
op.attr('scale'))
offset_node = self.make_constant_node(offset_name,
onnx_pb.TensorProto.FLOAT,
op.attr('offset'))
name0 = self.get_name(op.type, 'add')
node0 = helper.make_node(
'Add', inputs=[op.input('X')[0], offset_name], outputs=[name0])
name1 = self.get_name(op.type, 'relu')
node1 = helper.make_node(
'Clip',
inputs=[name0, min_name, max_name],
outputs=[name1], )
name2 = self.get_name(op.type, 'mul')
node2 = helper.make_node(
'Mul', inputs=[op.input('X')[0], name1], outputs=[name2])
node3 = helper.make_node(
'Div', inputs=[name2, scale_name], outputs=op.output('Out'))
return [
min_node, max_node, scale_node, offset_node, node0, node1, node2,
node3
]
def elementwise_mul(self, op, block):
axis = op.attr('axis')
x_shape = block.var(op.input('X')[0]).shape
y_shape = block.var(op.input('Y')[0]).shape
if len(y_shape) == 1 and axis == 1:
shape_name = self.get_name(op.type, 'shape')
shape_value = [1] * len(x_shape)
shape_value[axis] = y_shape[0]
shape_node = self.make_constant_node(
shape_name, onnx_pb.TensorProto.INT64, shape_value)
temp_value = self.get_name(op.type, 'temp')
y_node = helper.make_node(
'Reshape',
inputs=[op.input('Y')[0], shape_name],
outputs=[temp_value])
node = helper.make_node(
'Mul',
inputs=[op.input('X')[0], temp_value],
outputs=op.output('Out'))
return [shape_node, y_node, node]
elif len(x_shape) == len(y_shape):
node = helper.make_node(
'Mul',
inputs=[op.input('X')[0], op.input('Y')[0]],
outputs=op.output('Out'))
return node
else:
            raise Exception("Unexpected situation happened in elementwise_mul")
def feed(self, op, block):
name = op.output('Out')[0]
var = block.var(name)
tensor_info = helper.make_tensor_value_info(
name=name,
shape=var.shape,
elem_type=self.paddle_onnx_dtype_map[var.dtype])
return tensor_info
def fetch(self, op, block):
name = op.input('X')[0]
var = block.var(name)
tensor_info = helper.make_tensor_value_info(
name=name,
shape=var.shape,
elem_type=self.paddle_onnx_dtype_map[var.dtype])
return tensor_info
def unsqueeze2(self, op, block):
node = helper.make_node(
'Unsqueeze',
inputs=op.input('X'),
outputs=op.output('Out'),
axes=op.attr('axes'))
return node
def arg_max(self, op, block):
node = helper.make_node(
'ArgMax',
inputs=op.input('X'),
outputs=op.output('Out'),
axis=op.attr('axis'),
keepdims=0)
return node
def reciprocal(self, op, block):
inputs = op.input(op.input_names[0])
outputs = op.output(op.output_names[0])
node = helper.make_node('Reciprocal', inputs=inputs, outputs=outputs)
return node
def im2sequence(self, op, block):
from .paddle_custom_layer.im2sequence import im2sequence
return im2sequence(op, block)
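# A minimal usage sketch for PaddleOpMapper (hypothetical; assumes an
# inference model saved with fluid.io.save_inference_model, and the
# directory/save paths below are placeholders):
#
#   import paddle.fluid as fluid
#   exe = fluid.Executor(fluid.CPUPlace())
#   [program, feed_names, fetch_targets] = fluid.io.load_inference_model(
#       dirname='paddle_infer_model_dir', executor=exe)
#   mapper = PaddleOpMapper()
#   mapper.convert(program, save_dir='onnx_model')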
...@@ -85,7 +85,8 @@ class TFOpMapper(OpMapper): ...@@ -85,7 +85,8 @@ class TFOpMapper(OpMapper):
not_placeholder = list() not_placeholder = list()
for name in self.graph.input_nodes: for name in self.graph.input_nodes:
if self.graph.get_node(name).layer_type != "Placeholder" and self.graph.get_node(name).layer_type != "OneShotIterator": if self.graph.get_node(name).layer_type != "Placeholder" \
and self.graph.get_node(name).layer_type != "OneShotIterator":
not_placeholder.append(name) not_placeholder.append(name)
for name in not_placeholder: for name in not_placeholder:
idx = self.graph.input_nodes.index(name) idx = self.graph.input_nodes.index(name)
...@@ -113,9 +114,8 @@ class TFOpMapper(OpMapper): ...@@ -113,9 +114,8 @@ class TFOpMapper(OpMapper):
else: else:
unsupported_ops.add(op) unsupported_ops.add(op)
if len(unsupported_ops) > 0: if len(unsupported_ops) > 0:
sys.stderr.write( sys.stderr.write("=========={} Ops are not supported yet======\n".
"=========={} Ops are not supported yet======\n".format( format(len(unsupported_ops)))
len(unsupported_ops)))
for op in unsupported_ops: for op in unsupported_ops:
sys.stderr.write("========== {} ==========\n".format(op)) sys.stderr.write("========== {} ==========\n".format(op))
sys.exit(-1) sys.exit(-1)
...@@ -140,10 +140,8 @@ class TFOpMapper(OpMapper): ...@@ -140,10 +140,8 @@ class TFOpMapper(OpMapper):
pd_param_name = list(param.values())[0] pd_param_name = list(param.values())[0]
tf_param = node.get_attr(tf_param_name) tf_param = node.get_attr(tf_param_name)
attr[pd_param_name] = tf_param attr[pd_param_name] = tf_param
node.fluid_code.add_layer(op_info[0], node.fluid_code.add_layer(
inputs=input, op_info[0], inputs=input, output=node, param_attr=attr)
output=node,
param_attr=attr)
def elementwise_map(self, node): def elementwise_map(self, node):
assert node.layer_type in self.elementwise_ops assert node.layer_type in self.elementwise_ops
...@@ -178,21 +176,21 @@ class TFOpMapper(OpMapper): ...@@ -178,21 +176,21 @@ class TFOpMapper(OpMapper):
0] == y_shape[-1] and y_shape.count(-1) < 1: 0] == y_shape[-1] and y_shape.count(-1) < 1:
shape = [1, x_shape[0], 1, 1] shape = [1, x_shape[0], 1, 1]
attr = {"shape": shape} attr = {"shape": shape}
node.fluid_code.add_layer("reshape", node.fluid_code.add_layer(
inputs=x_input, "reshape",
output="reshape_x", inputs=x_input,
param_attr=attr) output="reshape_x",
param_attr=attr)
if y_shape[0] != 1: if y_shape[0] != 1:
attr = {"expand_times": [y_shape[0], 1, 1, 1]} attr = {"expand_times": [y_shape[0], 1, 1, 1]}
node.fluid_code.add_layer("expand", node.fluid_code.add_layer(
inputs="reshape_x", "expand",
output="reshape_x", inputs="reshape_x",
param_attr=attr) output="reshape_x",
param_attr=attr)
inputs = {"x": "reshape_x", "y": y_input} inputs = {"x": "reshape_x", "y": y_input}
node.fluid_code.add_layer(op_type, node.fluid_code.add_layer(
inputs=inputs, op_type, inputs=inputs, output=node, param_attr=None)
output=node,
param_attr=None)
return return
else: else:
raise Exception("Unexpected situation happend") raise Exception("Unexpected situation happend")
@@ -204,10 +202,8 @@ class TFOpMapper(OpMapper):
                axis = -1
            attr = {"axis": axis}
            inputs = {"x": x_input, "y": y_input}
            node.fluid_code.add_layer(
                op_type, inputs=inputs, output=node, param_attr=attr)
            return

        is_sub_seq = True
@@ -241,10 +237,8 @@ class TFOpMapper(OpMapper):
            if len(x_expand_times) == 4 and x.tf_data_format == "NHWC":
                x_expand_times = [x_expand_times[i] for i in [0, 3, 1, 2]]
            attr = {"expand_times": x_expand_times}
            node.fluid_code.add_layer(
                "expand", inputs=x_input, output="x_tmp", param_attr=attr)
            x_input = "x_tmp"
        if y_need_expand:
            if len(y_expand_times) == 3 and y.tf_data_format == "NHWC":
@@ -252,16 +246,12 @@ class TFOpMapper(OpMapper):
            if len(y_expand_times) == 4 and y.tf_data_format == "NHWC":
                y_expand_times = [y_expand_times[i] for i in [0, 3, 1, 2]]
            attr = {"expand_times": y_expand_times}
            node.fluid_code.add_layer(
                "expand", inputs=y_input, output="y_tmp", param_attr=attr)
            y_input = "y_tmp"
        inputs = {"x": x_input, "y": y_input}
        node.fluid_code.add_layer(
            op_type, inputs=inputs, output=node, param_attr=None)

    def Placeholder(self, node):
        shape = node.out_shapes[0]
@@ -282,10 +272,8 @@ class TFOpMapper(OpMapper):
        if shape[0] < 0:
            self.batch_node = node
        node.fluid_code.add_layer(
            "data", inputs=None, output=node, param_attr=attr)

    def OneShotIterator(self, node):
        return self.Placeholder(node)
@@ -307,8 +295,8 @@ class TFOpMapper(OpMapper):
                shape = [shape[i] for i in [0, 3, 1, 2]]
            if len(shape) == 3:
                shape = [shape[i] for i in [2, 0, 1]]
                self.weights[node.layer_name] = numpy.transpose(node.value,
                                                                (2, 0, 1))
        elif node.tf_data_format == "NCHW":
            if len(shape) == 4:
                self.graph.data_format_propagation(node)
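# Numpy equivalent of the 3-D constant relayout just above: TF stores such
# weights as HWC, while the NCHW pipeline wants CHW, so channels move to the
# front with axis order (2, 0, 1).
import numpy as np

hwc = np.zeros((224, 224, 3))       # H, W, C
chw = np.transpose(hwc, (2, 0, 1))  # C, H, W
assert chw.shape == (3, 224, 224)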
@@ -319,10 +307,8 @@ class TFOpMapper(OpMapper):
            'name': string(node.layer_name),
            'default_initializer': initializer
        }
        node.fluid_code.add_layer(
            "create_parameter", inputs=None, output=node, param_attr=attr)

    def Transpose(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -361,16 +347,12 @@ class TFOpMapper(OpMapper):
            node.tf_data_format = [tf_data_format[i] for i in perm]
            node.pd_data_format = [pd_data_format[i] for i in perm]
            attr = {'perm': new_perm}
            node.fluid_code.add_layer(
                "transpose", inputs=input, output=node, param_attr=attr)
        elif len(node.out_shapes[0]) != 4:
            attr = {'perm': perm}
            node.fluid_code.add_layer(
                "transpose", inputs=input, output=node, param_attr=attr)
        else:
            raise Exception("Unexpected situation happend in Transpose OP")
@@ -400,10 +382,8 @@ class TFOpMapper(OpMapper):
            "pool_padding": string(pad_mode),
            "pool_stride": strides[2:4]
        }
        node.fluid_code.add_layer(
            "pool2d", inputs=input, output=node, param_attr=attr)

    def Conv2D(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -443,10 +423,8 @@ class TFOpMapper(OpMapper):
            "dilation": dilations[2:4],
            "padding": string(pad_mode)
        }
        node.fluid_code.add_layer(
            "conv2d", inputs=input, output=node, param_attr=attr)

    def BiasAdd(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -456,10 +434,8 @@ class TFOpMapper(OpMapper):
            axis = 1
        inputs = {"x": input, "y": bias}
        attr = {"axis": axis}
        node.fluid_code.add_layer(
            "elementwise_add", inputs=inputs, output=node, param_attr=attr)

    def FusedBatchNorm(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -490,10 +466,8 @@ class TFOpMapper(OpMapper):
            "is_test": True
        }
        node.fluid_code.add_layer(
            "batch_norm", inputs=input, output=node, param_attr=attr)

    def FusedBatchNormV3(self, node):
        return self.FusedBatchNorm(node)
@@ -538,10 +512,8 @@ class TFOpMapper(OpMapper):
            "use_cudnn": False,
            "padding": string(pad_mode)
        }
        node.fluid_code.add_layer(
            "conv2d", inputs=input, output=node, param_attr=attr)

    def Reshape(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -561,18 +533,17 @@ class TFOpMapper(OpMapper):
            attr = {"shape": shape}
            self.add_omit_nodes(param.layer_name, node.layer_name)
        else:
            assert len(param.out_shapes[
                0]) == 1, "Unexpected situation of shape parameter"
            attr = {"shape": [-1]}
            node.fluid_code.add_layer(
                "reshape",
                inputs=param,
                output="shape_param",
                param_attr=attr)
            attr = {"num_or_sections": param.out_shapes[0][0], "dim": 0}
            node.fluid_code.add_layer(
                "split", inputs="shape_param", output=node, param_attr=attr)
            new_param = "["
            for i in range(param.out_shapes[0][0]):
                new_param += (node.layer_name + "[{}]".format(i) + ", ")
@@ -600,14 +571,10 @@ class TFOpMapper(OpMapper):
        if len(input.out_shapes[0]) == 4 and node.tf_data_format == "NHWC":
            if len(attr["shape"]) < 3:
                perm = {"perm": [0, 2, 3, 1]}
                node.fluid_code.add_layer(
                    "transpose", inputs=input, output=node, param_attr=perm)
                node.fluid_code.add_layer(
                    "reshape", inputs=node, output=node, param_attr=attr)
                return

        if len(attr["shape"]) == 4 and node.tf_data_format == "NHWC":
@@ -616,27 +583,19 @@ class TFOpMapper(OpMapper):
                attr["shape"] = [attr["shape"][i] for i in [0, 3, 1, 2]]
            else:
                perm = {"perm": [0, 2, 3, 1]}
                node.fluid_code.add_layer(
                    "transpose", inputs=input, output=node, param_attr=perm)
                node.fluid_code.add_layer(
                    "reshape", inputs=node, output=node, param_attr=attr)
                perm = {"perm": [0, 3, 1, 2]}
                node.fluid_code.add_layer(
                    "transpose", inputs=node, output=node, param_attr=perm)
                return
        if len(attr["shape"]) == 5:
            attr["shape"] = [attr["shape"][i] for i in [0, 1, 4, 2, 3]]
        node.fluid_code.add_layer(
            "reshape", inputs=input, output=node, param_attr=attr)

    def AvgPool(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -664,10 +623,8 @@ class TFOpMapper(OpMapper):
            "pool_stride": strides[2:4],
            "pool_padding": string(pad_mode)
        }
        node.fluid_code.add_layer(
            "pool2d", inputs=input, output=node, param_attr=attr)

    def SplitV(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -684,28 +641,24 @@ class TFOpMapper(OpMapper):
            "num_or_sections": num_sections.value.tolist(),
            "dim": dim.value
        }
        node.fluid_code.add_layer(
            "split", inputs=input, output=node, param_attr=attr)

    def ConcatV2(self, node):
        inputs = [
            self.graph.get_node(
                name, copy=True) for name in node.layer.input[:-1]
        ]
        axis = self.graph.get_node(node.layer.input[-1], copy=True)
        assert axis.layer_type == "Const"
        self.add_omit_nodes(axis.layer_name, node.layer_name)
        axis = axis.value
        if inputs[0].tf_data_format == "NHWC" and len(inputs[0].out_shapes[
                0]) == 4:
            axis = nhwc_dim_to_nchw(inputs[0], axis)
        attr = {"axis": axis}
        node.fluid_code.add_layer(
            "concat", inputs=inputs, output=node, param_attr=attr)

    def Tile(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
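# Hedged sketch of what nhwc_dim_to_nchw is assumed to do for 4-D tensors
# (the real helper lives in the mapper utilities): remap an axis index from
# NHWC order to the NCHW order of the emitted Paddle code, normalizing
# negative axes first.
def nhwc_axis_to_nchw(axis):
    if axis < 0:
        axis += 4
    return [0, 2, 3, 1][axis]  # N stays 0, H -> 2, W -> 3, C -> 1

assert nhwc_axis_to_nchw(3) == 1   # the channel axis moves to position 1
assert nhwc_axis_to_nchw(-1) == 1  # negative indexing resolves the same way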
@@ -725,18 +678,17 @@ class TFOpMapper(OpMapper):
                expand_times[i] = 1
        attr = {"expand_times": expand_times}
        node.fluid_code.add_layer(
            "expand", inputs=input, output=node, param_attr=attr)

    def Pack(self, node):
        inputs = [
            self.graph.get_node(
                name, copy=True) for name in node.layer.input
        ]
        axis = node.get_attr("axis")
        if inputs[0].tf_data_format == "NHWC" and len(inputs[0].out_shapes[
                0]) == 4:
            tf_data_format = list(inputs[0].tf_data_format)
            tf_data_format.insert(axis, str(len(tf_data_format)))
            axis = nhwc_dim_to_nchw(inputs[0], axis)
@@ -746,10 +698,8 @@ class TFOpMapper(OpMapper):
            node.pd_data_format = "".join(pd_data_format)
        attr = {"axis": axis}
        node.fluid_code.add_layer(
            "stack", inputs=inputs, output=node, param_attr=attr)

    def Pad(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -766,10 +716,8 @@ class TFOpMapper(OpMapper):
            paddings = paddings[4:]
            pad_op = "pad2d"
        attr = {"paddings": paddings}
        node.fluid_code.add_layer(
            pad_op, inputs=input, output=node, param_attr=attr)

    def MirrorPad(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -788,10 +736,8 @@ class TFOpMapper(OpMapper):
            paddings = paddings[4:]
            pad_op = "pad2d"
        attr = {"paddings": paddings, "mode": string("reflect")}
        node.fluid_code.add_layer(
            pad_op, inputs=input, output=node, param_attr=attr)

    def Range(self, node):
        start = self.graph.get_node(node.layer.input[0], copy=True)
@@ -815,10 +761,8 @@ class TFOpMapper(OpMapper):
        inputs = {"start": start, "end": limit, "step": delta}
        attr = {"dtype": string(node.dtype)}
        node.fluid_code.add_layer(
            "range", inputs=inputs, output=node, param_attr=attr)

    def Mean(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -832,10 +776,8 @@ class TFOpMapper(OpMapper):
                dims[i] = nhwc_dim_to_nchw(input, dims[i])
        attr = {"dim": dims, "keep_dim": keep_dims}
        node.fluid_code.add_layer(
            "reduce_mean", inputs=input, output=node, param_attr=attr)

    def MatMul(self, node):
        x = self.graph.get_node(node.layer.input[0], copy=True)
@@ -849,15 +791,11 @@ class TFOpMapper(OpMapper):
            shape = x.out_shapes[0]
            shape[-1] = y.out_shapes[0][0]
            attr = {"shape": shape}
            node.fluid_code.add_layer(
                "reshape", inputs=x, output=x, param_attr=attr)
        attr = {"transpose_x": transpose_a, "transpose_y": transpose_b}
        node.fluid_code.add_layer(
            "matmul", inputs=inputs, output=node, param_attr=attr)

    def ArgMax(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -868,10 +806,8 @@ class TFOpMapper(OpMapper):
        if input.tf_data_format == "NHWC" and len(input.out_shapes[0]) == 4:
            axis = nhwc_dim_to_nchw(input, axis)
        attr = {"axis": axis}
        node.fluid_code.add_layer(
            "argmax", inputs=input, output=node, param_attr=attr)

    def StridedSlice(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -909,16 +845,12 @@ class TFOpMapper(OpMapper):
                x = shrink_axis_mask >> i & 1
                if x == 1:
                    squeeze_dims.append(i)
        node.fluid_code.add_layer(
            "slice", inputs=input, output=node, param_attr=attr)
        if shrink_axis_mask > 0 and len(input.out_shapes[0]) == 5:
            attr = {"axes": squeeze_dims}
            node.fluid_code.add_layer(
                "squeeze", inputs=node, output=node, param_attr=attr)

    def Slice(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
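# The shrink_axis_mask handling above decodes a TensorFlow bitmask: bit i
# set means dimension i was sliced down to size 1 and should be squeezed
# away. A stand-alone decoder, assuming tf.strided_slice mask semantics:
def decode_axis_mask(mask, rank):
    return [i for i in range(rank) if mask >> i & 1]

assert decode_axis_mask(0b101, 4) == [0, 2]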
@@ -950,10 +882,8 @@ class TFOpMapper(OpMapper):
            "starts": begin,
            "ends": size
        }
        node.fluid_code.add_layer(
            "slice", inputs=input, output=node, param_attr=attr)

    def Conv2DBackpropInput(self, node):
        out_shape = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1003,10 +933,8 @@ class TFOpMapper(OpMapper):
            "padding": string(pad_mode),
            "output_size": out_shape[1:3]
        }
        node.fluid_code.add_layer(
            "conv2d_transpose", inputs=input, output=node, param_attr=attr)

    def Max(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1018,10 +946,8 @@ class TFOpMapper(OpMapper):
            dim = nhwc_dim_to_nchw(input, dim)
        attr = {"dim": dim, "keep_dim": keep_dims}
        node.fluid_code.add_layer(
            "reduce_max", inputs=input, output=node, param_attr=attr)

    def Sum(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1033,19 +959,15 @@ class TFOpMapper(OpMapper):
            dim = nhwc_dim_to_nchw(input, dim)
        attr = {"dim": dim, "keep_dim": keep_dims}
        node.fluid_code.add_layer(
            "reduce_sum", inputs=input, output=node, param_attr=attr)

    def Cast(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
        dtype = node.dtype_map[node.get_attr('DstT')]
        attr = {"dtype": string(dtype)}
        node.fluid_code.add_layer(
            "cast", inputs=input, output=node, param_attr=attr)

    def Split(self, node):
        dim = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1057,10 +979,8 @@ class TFOpMapper(OpMapper):
            dim = nhwc_dim_to_nchw(input, dim)
        attr = {"num_or_sections": num_split, "dim": dim}
        node.fluid_code.add_layer(
            "split", inputs=input, output=node, param_attr=attr)

    def Squeeze(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1069,10 +989,8 @@ class TFOpMapper(OpMapper):
            for i in range(len(squeeze_dims)):
                squeeze_dims[i] = nhwc_dim_to_nchw(input, squeeze_dims[i])
        attr = {"axes": squeeze_dims}
        node.fluid_code.add_layer(
            "squeeze", inputs=input, output=node, param_attr=attr)

    def Softmax(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1082,10 +1000,8 @@ class TFOpMapper(OpMapper):
        if input.tf_data_format == "NHWC" and len(input.out_shapes[0]) == 4:
            axis = nhwc_dim_to_nchw(input, axis)
        attr = {"axis": axis}
        node.fluid_code.add_layer(
            "softmax", inputs=input, output=node, param_attr=attr)

    def ResizeNearestNeighbor(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1094,14 +1010,12 @@ class TFOpMapper(OpMapper):
        if resize_shape.layer_type == "Const":
            resize_shape = resize_shape.value.tolist()
        else:
            resize_shape = self.decoder.infer_shape_tensor(resize_shape,
                                                           node.out_shapes[0])
        align_corners = node.get_attr("align_corners")
        attr = {"align_corners": align_corners, "out_shape": resize_shape}
        node.fluid_code.add_layer(
            "resize_nearest", inputs=input, output=node, param_attr=attr)

    def ResizeBilinear(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1110,27 +1024,23 @@ class TFOpMapper(OpMapper):
        if resize_shape.layer_type == "Const":
            resize_shape = resize_shape.value.tolist()
        else:
            resize_shape = self.decoder.infer_shape_tensor(resize_shape,
                                                           node.out_shapes[0])
        align_corners = node.get_attr("align_corners")
        attr = {
            "align_corners": align_corners,
            "out_shape": resize_shape,
            "align_mode": 1
        }
        node.fluid_code.add_layer(
            "resize_bilinear", inputs=input, output=node, param_attr=attr)

    def GreaterEqual(self, node):
        x = self.graph.get_node(node.layer.input[0], copy=True)
        y = self.graph.get_node(node.layer.input[1], copy=True)
        inputs = {"x": x, "y": y}
        node.fluid_code.add_layer(
            "greater_equal", inputs=inputs, output=node, param_attr=None)

    def RandomUniform(self, node):
        shape = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1144,26 +1054,21 @@ class TFOpMapper(OpMapper):
        attr = {"shape": shape, "min": 0.0, "max": 0.9999}
        if shape[0] < 0:
            input = self.batch_node
            node.fluid_code.add_layer(
                "uniform_random_batch_size_like",
                inputs=input,
                output=node,
                param_attr=attr)
        else:
            node.fluid_code.add_layer(
                "uniform_random", inputs=None, output=node, param_attr=attr)

    def SquaredDifference(self, node):
        x = self.graph.get_node(node.layer.input[0], copy=True)
        y = self.graph.get_node(node.layer.input[1], copy=True)
        inputs = {"x": x, "y": y}
        node.fluid_code.add_layer(
            "elementwise_sub", inputs=inputs, output=node, param_attr=None)
        inputs = {"x": node, "y": node}
        node.fluid_code.add_layer(
            "elementwise_mul", inputs=inputs, output=node, param_attr=None)
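# SquaredDifference is lowered to two elementwise layers because no single
# Paddle op is used here: out = (x - y) * (x - y). Numpy equivalent of the
# emitted sequence:
import numpy as np

def squared_difference(x, y):
    diff = x - y        # the "elementwise_sub" layer
    return diff * diff  # the "elementwise_mul" layer

assert squared_difference(np.array([3.0]), np.array([1.0]))[0] == 4.0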
@@ -43,6 +43,7 @@ class TFOpMapperNHWC(OpMapper):
        'Sqrt': ['sqrt'],
        'swish_f32': ['swish'],
        'Tanh': ['tanh'],
        'Softplus': ['softplus'],
        'LeakyRelu': ['leaky_relu', {
            'alpha': 'alpha'
        }]
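# How a directly_map table entry is consumed (cf. directly_map earlier in
# this diff): op_info[0] names the fluid layer, and each following dict maps
# a TF attribute name to the Paddle parameter name. A minimal re-enactment
# with a plain dict standing in for the node's attributes (an illustrative
# assumption, not the mapper's API):
op_info = ['leaky_relu', {'alpha': 'alpha'}]
tf_attrs = {'alpha': 0.2}

attr = {}
for param in op_info[1:]:
    tf_param_name = list(param.keys())[0]
    pd_param_name = list(param.values())[0]
    attr[pd_param_name] = tf_attrs[tf_param_name]
assert attr == {'alpha': 0.2}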
@@ -128,26 +129,18 @@ class TFOpMapperNHWC(OpMapper):
        if len(input.out_shapes[0]) == 4 and op_info[0] != 'shape':
            attr1 = {"perm": [0, 3, 1, 2]}
            node.fluid_code.add_layer(
                'transpose', inputs=input, output=node, param_attr=attr1)
            input = node
            node.fluid_code.add_layer(
                op_info[0], inputs=input, output=node, param_attr=attr)
            input = node
            attr2 = {"perm": [0, 2, 3, 1]}
            node.fluid_code.add_layer(
                'transpose', inputs=input, output=node, param_attr=attr2)
        else:
            node.fluid_code.add_layer(
                op_info[0], inputs=input, output=node, param_attr=attr)

    def elementwise_map(self, node):
        assert node.layer_type in self.elementwise_ops
@@ -208,42 +201,37 @@ class TFOpMapperNHWC(OpMapper):
            raise Exception("Unexpected situation happend")
        if x_need_expand:
            attr = {"expand_times": x_expand_times}
            node.fluid_code.add_layer(
                "expand", inputs=x_input, output="x_tmp", param_attr=attr)
            x_input = "x_tmp"
        if y_need_expand:
            attr = {"expand_times": y_expand_times}
            node.fluid_code.add_layer(
                "expand", inputs=y_input, output="y_tmp", param_attr=attr)
            y_input = "y_tmp"
        if len(x_shape) == 4 and len(y_shape) == 4:
            node.fluid_code.add_layer(
                "transpose",
                inputs=x_input,
                output=x_input,
                param_attr={'perm': [0, 3, 1, 2]})
            node.fluid_code.add_layer(
                "transpose",
                inputs=y_input,
                output=y_input,
                param_attr={'perm': [0, 3, 1, 2]})
            inputs = {"x": x_input, "y": y_input}
            node.fluid_code.add_layer(
                op_type, inputs=inputs, output=node, param_attr=None)
            node.fluid_code.add_layer(
                "transpose",
                inputs=node,
                output=node,
                param_attr={'perm': [0, 2, 3, 1]})
        else:
            inputs = {"x": x_input, "y": y_input}
            node.fluid_code.add_layer(
                op_type, inputs=inputs, output=node, param_attr=None)

    def Placeholder(self, node):
        shape = node.out_shapes[0]
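# The NHWC mapper wraps layout-sensitive ops in a "transpose sandwich":
# convert the NHWC input to NCHW, run the Paddle op, convert back. Numpy
# sketch of the two perms used throughout this file:
import numpy as np

x = np.zeros((8, 32, 32, 3))                 # NHWC input
x_nchw = np.transpose(x, (0, 3, 1, 2))       # perm [0, 3, 1, 2]
x_back = np.transpose(x_nchw, (0, 2, 3, 1))  # perm [0, 2, 3, 1]
assert x_back.shape == x.shape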
@@ -259,10 +247,8 @@ class TFOpMapperNHWC(OpMapper):
            'append_batch_size': False
        }
        node.fluid_code.add_layer(
            "data", inputs=None, output=node, param_attr=attr)

    def Const(self, node):
        shape = node.out_shapes[0]
@@ -282,10 +268,8 @@ class TFOpMapperNHWC(OpMapper):
            'name': string(node.layer_name),
            'default_initializer': initializer
        }
        node.fluid_code.add_layer(
            "create_parameter", inputs=None, output=node, param_attr=attr)

    def Transpose(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -296,10 +280,8 @@ class TFOpMapperNHWC(OpMapper):
        perm = perm.value.tolist()
        attr = {'perm': perm}
        node.fluid_code.add_layer(
            "transpose", inputs=input, output=node, param_attr=attr)

    def MaxPool(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -316,10 +298,8 @@ class TFOpMapperNHWC(OpMapper):
        if not channel_first:
            attr = {"perm": [0, 3, 1, 2]}
            node.fluid_code.add_layer(
                "transpose", inputs=input, output=node, param_attr=attr)
            in_shape = [in_shape[i] for i in [0, 3, 1, 2]]
            strides = [strides[i] for i in [0, 3, 1, 2]]
            k_size = [k_size[i] for i in [0, 3, 1, 2]]
@@ -331,17 +311,13 @@ class TFOpMapperNHWC(OpMapper):
            "pool_stride": strides[2:4],
            "pool_padding": string(pad_mode)
        }
        node.fluid_code.add_layer(
            "pool2d", inputs=input, output=node, param_attr=attr)

        if not channel_first:
            attr = {"perm": [0, 2, 3, 1]}
            node.fluid_code.add_layer(
                "transpose", inputs=node, output=node, param_attr=attr)

    def Conv2D(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -373,10 +349,8 @@ class TFOpMapperNHWC(OpMapper):
            strides = [strides[i] for i in [0, 3, 1, 2]]
            dilations = [dilations[i] for i in [0, 3, 1, 2]]
            attr = {"perm": [0, 3, 1, 2]}
            node.fluid_code.add_layer(
                "transpose", inputs=input, output=node, param_attr=attr)
            input = node
        attr = {
@@ -393,25 +367,19 @@ class TFOpMapperNHWC(OpMapper):
        if len(node.dilation) == 1:
            attr['dilation'] = [1, node.dilation[0]]
        node.fluid_code.add_layer(
            "conv2d", inputs=input, output=node, param_attr=attr)

        if not channel_first:
            attr = {"perm": [0, 2, 3, 1]}
            node.fluid_code.add_layer(
                "transpose", inputs=node, output=node, param_attr=attr)

    def BiasAdd(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
        bias = self.graph.get_node(node.layer.input[1], copy=True)
        inputs = {"x": input, "y": bias}
        node.fluid_code.add_layer(
            "elementwise_add", inputs=inputs, output=node, param_attr=None)

    def FusedBatchNorm(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -433,10 +401,8 @@ class TFOpMapperNHWC(OpMapper):
        if not channel_first:
            attr = {"perm": [0, 3, 1, 2]}
            node.fluid_code.add_layer(
                "transpose", inputs=input, output=node, param_attr=attr)
            input = node
        attr = {
@@ -448,17 +414,13 @@ class TFOpMapperNHWC(OpMapper):
            "is_test": True
        }
        node.fluid_code.add_layer(
            "batch_norm", inputs=input, output=node, param_attr=attr)

        if not channel_first:
            attr = {"perm": [0, 2, 3, 1]}
            node.fluid_code.add_layer(
                "transpose", inputs=node, output=node, param_attr=attr)

    def DepthwiseConv2dNative(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -487,10 +449,8 @@ class TFOpMapperNHWC(OpMapper):
            strides = [strides[i] for i in [0, 3, 1, 2]]
            dilations = [dilations[i] for i in [0, 3, 1, 2]]
            attr = {"perm": [0, 3, 1, 2]}
            node.fluid_code.add_layer(
                "transpose", inputs=input, output=node, param_attr=attr)
            input = node
        attr = {
@@ -504,17 +464,13 @@ class TFOpMapperNHWC(OpMapper):
            "use_cudnn": False,
            "padding": string(pad_mode)
        }
        node.fluid_code.add_layer(
            "conv2d", inputs=input, output=node, param_attr=attr)

        if not channel_first:
            attr = {"perm": [0, 2, 3, 1]}
            node.fluid_code.add_layer(
                "transpose", inputs=node, output=node, param_attr=attr)

    def Reshape(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -530,18 +486,17 @@ class TFOpMapperNHWC(OpMapper):
            attr = {"shape": shape}
            self.add_omit_nodes(param.layer_name, node.layer_name)
        else:
            assert len(param.out_shapes[
                0]) == 1, "Unexpected situation of shape parameter"
            attr = {"shape": [-1]}
            node.fluid_code.add_layer(
                "reshape",
                inputs=param,
                output="shape_param",
                param_attr=attr)
            attr = {"num_or_sections": param.out_shapes[0][0], "dim": 0}
            node.fluid_code.add_layer(
                "split", inputs="shape_param", output=node, param_attr=attr)
            new_param = "["
            for i in range(param.out_shapes[0][0]):
                new_param += (node.layer_name + "[{}]".format(i) + ", ")
@@ -565,10 +520,8 @@ class TFOpMapperNHWC(OpMapper):
                attr["shape"][index] = int(total_size)
                attr["shape"][0] = -1
        node.fluid_code.add_layer(
            "reshape", inputs=input, output=node, param_attr=attr)

    def AvgPool(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -588,10 +541,8 @@ class TFOpMapperNHWC(OpMapper):
            strides = [strides[i] for i in [0, 3, 1, 2]]
            k_size = [k_size[i] for i in [0, 3, 1, 2]]
            attr = {"perm": [0, 3, 1, 2]}
            node.fluid_code.add_layer(
                "transpose", inputs=input, output=node, param_attr=attr)
            input = node
        attr = {
@@ -600,17 +551,13 @@ class TFOpMapperNHWC(OpMapper):
            "pool_stride": strides[2:4],
            "pool_padding": string(pad_mode)
        }
        node.fluid_code.add_layer(
            "pool2d", inputs=input, output=node, param_attr=attr)

        if not channel_first:
            attr = {"perm": [0, 2, 3, 1]}
            node.fluid_code.add_layer(
                "transpose", inputs=node, output=node, param_attr=attr)

    def SplitV(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -625,15 +572,13 @@ class TFOpMapperNHWC(OpMapper):
            "num_or_sections": num_sections.value.tolist(),
            "dim": dim.value
        }
        node.fluid_code.add_layer(
            "split", inputs=input, output=node, param_attr=attr)

    def ConcatV2(self, node):
        inputs = [
            self.graph.get_node(
                name, copy=True) for name in node.layer.input[:-1]
        ]
        axis = self.graph.get_node(node.layer.input[-1], copy=True)
        assert axis.layer_type == "Const"
@@ -643,10 +588,8 @@ class TFOpMapperNHWC(OpMapper):
            axis += len(inputs[0].out_shapes[0])
        attr = {"axis": axis}
        node.fluid_code.add_layer(
            "concat", inputs=inputs, output=node, param_attr=attr)

    def Tile(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -660,21 +603,18 @@ class TFOpMapperNHWC(OpMapper):
            if expand_times[i] < 0:
                expand_times[i] = 1
        attr = {"expand_times": expand_times}
        node.fluid_code.add_layer(
            "expand", inputs=input, output=node, param_attr=attr)

    def Pack(self, node):
        inputs = [
            self.graph.get_node(
                name, copy=True) for name in node.layer.input
        ]
        axis = node.get_attr("axis")
        attr = {"axis": axis}
        node.fluid_code.add_layer(
            "stack", inputs=inputs, output=node, param_attr=attr)

    def Pad(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -695,30 +635,22 @@ class TFOpMapperNHWC(OpMapper):
        if new_padding is not None:
            if input.tf_data_format == "NHWC":
                attr = {"perm": [0, 3, 1, 2]}
                node.fluid_code.add_layer(
                    "transpose", inputs=input, output=node, param_attr=attr)
                input = node
            attr = {"paddings": new_padding}
            node.fluid_code.add_layer(
                "pad2d", inputs=input, output=node, param_attr=attr)
            if input.tf_data_format == "NHWC":
                attr = {"perm": [0, 2, 3, 1]}
                node.fluid_code.add_layer(
                    "transpose", inputs=node, output=node, param_attr=attr)
            return

        attr = {"paddings": paddings}
        node.fluid_code.add_layer(
            "pad", inputs=input, output=node, param_attr=attr)

    def Range(self, node):
        start = self.graph.get_node(node.layer.input[0], copy=True)
@@ -746,10 +678,8 @@ class TFOpMapperNHWC(OpMapper):
            "step": delta,
        }
        attr = {"dtype": string(node.dtype)}
        node.fluid_code.add_layer(
            "range", inputs=inputs, output=node, param_attr=attr)

    def Mean(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -759,10 +689,8 @@ class TFOpMapperNHWC(OpMapper):
        keep_dims = node.get_attr("keep_dims")
        attr = {"dim": dims, "keep_dim": keep_dims}
        node.fluid_code.add_layer(
            "reduce_mean", inputs=input, output=node, param_attr=attr)

    def MatMul(self, node):
        x = self.graph.get_node(node.layer.input[0], copy=True)
@@ -776,15 +704,11 @@ class TFOpMapperNHWC(OpMapper):
            shape = x.out_shapes[0]
            shape[-1] = y.out_shapes[0][0]
            attr = {"shape": shape}
            node.fluid_code.add_layer(
                "reshape", inputs=x, output=x, param_attr=attr)
        attr = {"transpose_x": transpose_a, "transpose_y": transpose_b}
        node.fluid_code.add_layer(
            "matmul", inputs=inputs, output=node, param_attr=attr)

    def ArgMax(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -793,10 +717,8 @@ class TFOpMapperNHWC(OpMapper):
        self.add_omit_nodes(axis.layer_name, node.layer_name)
        axis = axis.value
        attr = {"axis": axis}
        node.fluid_code.add_layer(
            "argmax", inputs=input, output=node, param_attr=attr)

    def StridedSlice(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -862,25 +784,19 @@ class TFOpMapperNHWC(OpMapper):
            "starts": new_begin,
            "ends": new_end
        }
        node.fluid_code.add_layer(
            "slice", inputs=input, output=node, param_attr=attr)
        if len(new_axes) > 0:
            attr = {"axes": new_axes}
            node.fluid_code.add_layer(
                "unsqueeze", inputs=node, output=node, param_attr=attr)
        if len(shrink_axes) > 0:
            if len(input.out_shapes[0]) + len(new_axes) <= 1:
                pass
            else:
                attr = {"axes": shrink_axes}
                node.fluid_code.add_layer(
                    "squeeze", inputs=node, output=node, param_attr=attr)

    def Slice(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -909,10 +825,8 @@ class TFOpMapperNHWC(OpMapper):
            "ends": size
        }
        node.fluid_code.add_layer(
            "slice", inputs=input, output=node, param_attr=attr)

    def Conv2DBackpropInput(self, node):
        out_shape = self.graph.get_node(node.layer.input[0], copy=True)
@@ -950,10 +864,8 @@ class TFOpMapperNHWC(OpMapper):
            strides = [strides[i] for i in [0, 3, 1, 2]]
            dilations = [dilations[i] for i in [0, 3, 1, 2]]
            attr = {"perm": [0, 3, 1, 2]}
            node.fluid_code.add_layer(
                "transpose", inputs=input, output=node, param_attr=attr)
            input = node
        else:
            self.data_format_propagation(node)
@@ -968,17 +880,13 @@ class TFOpMapperNHWC(OpMapper):
            "padding": string(pad_mode),
            "output_size": out_shape[1:3]
        }
        node.fluid_code.add_layer(
            "conv2d_transpose", inputs=input, output=node, param_attr=attr)

        if not channel_first:
            attr = {"perm": [0, 2, 3, 1]}
            node.fluid_code.add_layer(
                "transpose", inputs=node, output=node, param_attr=attr)

    def Max(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -988,10 +896,8 @@ class TFOpMapperNHWC(OpMapper):
        dim = reduce_idx.value.tolist()
        attr = {"dim": dim, "keep_dim": keep_dims}
        node.fluid_code.add_layer(
            "reduce_max", inputs=input, output=node, param_attr=attr)

    def Sum(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1001,19 +907,15 @@ class TFOpMapperNHWC(OpMapper):
        dim = reduce_idx.value.tolist()
        attr = {"dim": dim, "keep_dim": keep_dims}
        node.fluid_code.add_layer(
            "reduce_sum", inputs=input, output=node, param_attr=attr)

    def Cast(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
        dtype = node.dtype_map[node.get_attr('DstT')]
        attr = {"dtype": string(dtype)}
        node.fluid_code.add_layer(
            "cast", inputs=input, output=node, param_attr=attr)

    def Split(self, node):
        dim = self.graph.get_node(node.layer.input[0], copy=True)
...@@ -1024,28 +926,22 @@ class TFOpMapperNHWC(OpMapper): ...@@ -1024,28 +926,22 @@ class TFOpMapperNHWC(OpMapper):
dim = dim.value dim = dim.value
attr = {"num_or_sections": num_split, "dim": dim} attr = {"num_or_sections": num_split, "dim": dim}
node.fluid_code.add_layer("split", node.fluid_code.add_layer(
inputs=input, "split", inputs=input, output=node, param_attr=attr)
output=node,
param_attr=attr)
def Squeeze(self, node): def Squeeze(self, node):
input = self.graph.get_node(node.layer.input[0], copy=True) input = self.graph.get_node(node.layer.input[0], copy=True)
squeeze_dims = node.get_attr('squeeze_dims') squeeze_dims = node.get_attr('squeeze_dims')
attr = {"axes": squeeze_dims} attr = {"axes": squeeze_dims}
node.fluid_code.add_layer("squeeze", node.fluid_code.add_layer(
inputs=input, "squeeze", inputs=input, output=node, param_attr=attr)
output=node,
param_attr=attr)
def Softmax(self, node): def Softmax(self, node):
input = self.graph.get_node(node.layer.input[0], copy=True) input = self.graph.get_node(node.layer.input[0], copy=True)
axis = node.get_attr("axis") axis = node.get_attr("axis")
attr = {"axis": axis} attr = {"axis": axis}
node.fluid_code.add_layer("softmax", node.fluid_code.add_layer(
inputs=input, "softmax", inputs=input, output=node, param_attr=attr)
output=node,
param_attr=attr)
def ResizeNearestNeighbor(self, node): def ResizeNearestNeighbor(self, node):
input = self.graph.get_node(node.layer.input[0], copy=True) input = self.graph.get_node(node.layer.input[0], copy=True)
...@@ -1054,24 +950,18 @@ class TFOpMapperNHWC(OpMapper): ...@@ -1054,24 +950,18 @@ class TFOpMapperNHWC(OpMapper):
if resize_shape.layer_type == "Const": if resize_shape.layer_type == "Const":
resize_shape = resize_shape.value.tolist() resize_shape = resize_shape.value.tolist()
else: else:
resize_shape = self.decoder.infer_shape_tensor( resize_shape = self.decoder.infer_shape_tensor(resize_shape,
resize_shape, node.out_shapes[0]) node.out_shapes[0])
align_corners = node.get_attr("align_corners") align_corners = node.get_attr("align_corners")
attr = {"perm": [0, 3, 1, 2]} attr = {"perm": [0, 3, 1, 2]}
node.fluid_code.add_layer("transpose", node.fluid_code.add_layer(
inputs=input, "transpose", inputs=input, output=node, param_attr=attr)
output=node,
param_attr=attr)
attr = {"align_corners": align_corners, "out_shape": resize_shape} attr = {"align_corners": align_corners, "out_shape": resize_shape}
node.fluid_code.add_layer("resize_nearest", node.fluid_code.add_layer(
inputs=node, "resize_nearest", inputs=node, output=node, param_attr=attr)
output=node,
param_attr=attr)
attr = {"perm": [0, 2, 3, 1]} attr = {"perm": [0, 2, 3, 1]}
node.fluid_code.add_layer("transpose", node.fluid_code.add_layer(
inputs=node, "transpose", inputs=node, output=node, param_attr=attr)
output=node,
param_attr=attr)
def ResizeBilinear(self, node): def ResizeBilinear(self, node):
input = self.graph.get_node(node.layer.input[0], copy=True) input = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1080,37 +970,29 @@ class TFOpMapperNHWC(OpMapper):
         if resize_shape.layer_type == "Const":
             resize_shape = resize_shape.value.tolist()
         else:
-            resize_shape = self.decoder.infer_shape_tensor(
-                resize_shape, node.out_shapes[0])
+            resize_shape = self.decoder.infer_shape_tensor(resize_shape,
+                                                           node.out_shapes[0])
         align_corners = node.get_attr("align_corners")
         attr = {"perm": [0, 3, 1, 2]}
-        node.fluid_code.add_layer("transpose",
-                                  inputs=input,
-                                  output=node,
-                                  param_attr=attr)
+        node.fluid_code.add_layer(
+            "transpose", inputs=input, output=node, param_attr=attr)
         attr = {
             "align_corners": align_corners,
             "out_shape": resize_shape,
             "align_mode": 1
         }
-        node.fluid_code.add_layer("resize_bilinear",
-                                  inputs=node,
-                                  output=node,
-                                  param_attr=attr)
+        node.fluid_code.add_layer(
+            "resize_bilinear", inputs=node, output=node, param_attr=attr)
         attr = {"perm": [0, 2, 3, 1]}
-        node.fluid_code.add_layer("transpose",
-                                  inputs=node,
-                                  output=node,
-                                  param_attr=attr)
+        node.fluid_code.add_layer(
+            "transpose", inputs=node, output=node, param_attr=attr)

     def GreaterEqual(self, node):
         x = self.graph.get_node(node.layer.input[0], copy=True)
         y = self.graph.get_node(node.layer.input[1], copy=True)
         inputs = {"x": x, "y": y}
-        node.fluid_code.add_layer("greater_equal",
-                                  inputs=inputs,
-                                  output=node,
-                                  param_attr=None)
+        node.fluid_code.add_layer(
+            "greater_equal", inputs=inputs, output=node, param_attr=None)

     def RandomUniform(self, node):
         shape = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1123,29 +1005,24 @@ class TFOpMapperNHWC(OpMapper):
         if shape[0] < 0:
             input = self.batch_node
-            node.fluid_code.add_layer("uniform_random_batch_size_like",
-                                      inputs=input,
-                                      output=node,
-                                      param_attr=attr)
+            node.fluid_code.add_layer(
+                "uniform_random_batch_size_like",
+                inputs=input,
+                output=node,
+                param_attr=attr)
         else:
-            node.fluid_code.add_layer("uniform_random",
-                                      inputs=None,
-                                      output=node,
-                                      param_attr=attr)
+            node.fluid_code.add_layer(
+                "uniform_random", inputs=None, output=node, param_attr=attr)

     def SquaredDifference(self, node):
         x = self.graph.get_node(node.layer.input[0], copy=True)
         y = self.graph.get_node(node.layer.input[1], copy=True)
         inputs = {"x": x, "y": y}
-        node.fluid_code.add_layer("elementwise_sub",
-                                  inputs=inputs,
-                                  output=node,
-                                  param_attr=None)
+        node.fluid_code.add_layer(
+            "elementwise_sub", inputs=inputs, output=node, param_attr=None)
         inputs = {"x": node, "y": node}
-        node.fluid_code.add_layer("elementwise_mul",
-                                  inputs=inputs,
-                                  output=node,
-                                  param_attr=None)
+        node.fluid_code.add_layer(
+            "elementwise_mul", inputs=inputs, output=node, param_attr=None)

     def ExpandDims(self, node):
         x = self.graph.get_node(node.layer.input[0], copy=True)
@@ -1156,19 +1033,15 @@ class TFOpMapperNHWC(OpMapper):
             dim = self.decoder.infer_tensor(y)
         self.add_omit_nodes(y.layer_name, node.layer_name)
         attr = {'axes': [dim]}
-        node.fluid_code.add_layer("unsqueeze",
-                                  inputs=x,
-                                  output=node,
-                                  param_attr=attr)
+        node.fluid_code.add_layer(
+            "unsqueeze", inputs=x, output=node, param_attr=attr)

     def BatchToSpaceND(self, node):
         x = self.graph.get_node(node.layer.input[0], copy=True)
         y = self.graph.get_node(node.layer.input[1], copy=True)
         if hasattr(node, 'skip') and node.skip:
-            node.fluid_code.add_layer("=",
-                                      inputs=x,
-                                      output=node,
-                                      param_attr=None)
+            node.fluid_code.add_layer(
+                "=", inputs=x, output=node, param_attr=None)
         else:
             raise Exception("BatchToSpaceND is not supported")
@@ -1176,9 +1049,7 @@ class TFOpMapperNHWC(OpMapper):
         x = self.graph.get_node(node.layer.input[0], copy=True)
         y = self.graph.get_node(node.layer.input[1], copy=True)
         if hasattr(node, 'skip') and node.skip:
-            node.fluid_code.add_layer("=",
-                                      inputs=x,
-                                      output=node,
-                                      param_attr=None)
+            node.fluid_code.add_layer(
+                "=", inputs=x, output=node, param_attr=None)
         else:
             raise Exception("SpaceToBatchND is not supported")
...
@@ -41,10 +41,11 @@ class CaffeOptimizer(object):
             if is_delete_node:
                 parent_node.fluid_code.clear()
             node.fluid_code.clear()
-            node.fluid_code.add_layer("batch_norm",
-                                      inputs=input,
-                                      output=node,
-                                      param_attr=parent_param_attr)
+            node.fluid_code.add_layer(
+                "batch_norm",
+                inputs=input,
+                output=node,
+                param_attr=parent_param_attr)

     def merge_op_activation(self):
         for node_name in self.graph.topo_sort:
@@ -62,7 +63,8 @@ class CaffeOptimizer(object):
             if is_delete_node:
                 parent_node.fluid_code.clear()
             node.fluid_code.clear()
-            node.fluid_code.add_layer(op,
-                                      inputs=input,
-                                      output=node,
-                                      param_attr=parent_param_attr)
+            node.fluid_code.add_layer(
+                op,
+                inputs=input,
+                output=node,
+                param_attr=parent_param_attr)
...
@@ -554,10 +554,11 @@ class TFOptimizer(object):
                 node.fluid_code.layers[0].param_attr["shape"] = shape
                 node.fluid_code.layers[0].output = "nhwc_" + name
                 attr = {"perm": [0, 2, 3, 1]}
-                node.fluid_code.add_layer("transpose",
-                                          inputs="nhwc_" + name,
-                                          output=node,
-                                          param_attr=attr)
+                node.fluid_code.add_layer(
+                    "transpose",
+                    inputs="nhwc_" + name,
+                    output=node,
+                    param_attr=attr)
                 self.graph.input_nodes[i] = "nhwc_" + name
         for i, name in enumerate(self.graph.output_nodes):
             node = self.graph.get_node(name)
@@ -767,8 +768,8 @@ class TFOptimizer(object):
                     is_prelu = False
                     continue
-                if len(in_nodes0[0].outputs) != 1 or len(
-                        in_nodes0[1].outputs) != 1:
+                if len(in_nodes0[0].outputs) != 1 or len(in_nodes0[1]
+                                                         .outputs) != 1:
                     is_prelu = False
                     continue
@@ -777,8 +778,8 @@ class TFOptimizer(object):
                     self.graph.get_node(in_name)
                     for in_name in in_nodes0[1].inputs
                 ]
-                if in_nodes2[1].layer_type != "Const" or numpy.fabs(
-                        in_nodes2[1].value - 0.5) > 1e-06:
+                if in_nodes2[1].layer_type != "Const" or numpy.fabs(in_nodes2[
+                        1].value - 0.5) > 1e-06:
                     is_prelu = False
                     continue
                 if in_nodes2[0].layer_type != "Mul":
@@ -787,8 +788,8 @@ class TFOptimizer(object):
                 if exist_act(in_nodes2[0]):
                     is_prelu = False
                     continue
-                if len(in_nodes2[1].outputs) != 1 or len(
-                        in_nodes2[0].outputs) != 1:
+                if len(in_nodes2[1].outputs) != 1 or len(in_nodes2[0]
+                                                         .outputs) != 1:
                     is_prelu = False
                     continue
@@ -803,8 +804,8 @@ class TFOptimizer(object):
                 if exist_act(in_nodes3[1]):
                     is_prelu = False
                     continue
-                if len(in_nodes3[0].outputs) != 1 or len(
-                        in_nodes3[1].outputs) != 1:
+                if len(in_nodes3[0].outputs) != 1 or len(in_nodes3[1]
+                                                         .outputs) != 1:
                     is_prelu = False
                     continue
@@ -856,12 +857,12 @@ class TFOptimizer(object):
                     mode = "element"
                 elif len(in_nodes3[0].value.shape) == 0:
                     mode = "all"
-                elif len(in_nodes3[0].value.shape
-                         ) == 1 and in_nodes3[0].value.shape[0] == 1:
+                elif len(in_nodes3[0].value.shape) == 1 and in_nodes3[
+                        0].value.shape[0] == 1:
                     mode = "all"
-                elif len(in_shape) == 4 and len(
-                        in_nodes3[0].value.shape
-                ) == 1 and in_nodes3[0].value.shape[0] == in_shape[-1]:
+                elif len(in_shape) == 4 and len(in_nodes3[
+                        0].value.shape) == 1 and in_nodes3[0].value.shape[
+                            0] == in_shape[-1]:
                     mode = "channel"
                 weight = self.op_mapper.weights[in_nodes3[0].layer_name]
                 weight = numpy.expand_dims(weight, 0)
@@ -916,14 +917,15 @@ class TFOptimizer(object):
                     self.graph.get_node(in_name) for in_name in node.inputs
                 ]
-                if in_nodes0[0].layer_type != "Mul" or in_nodes0[
-                        1].layer_type != "Const" or in_nodes0[1].value.size != 1:
+                if in_nodes0[0].layer_type != "Mul" or in_nodes0[
+                        1].layer_type != "Const" or in_nodes0[
+                            1].value.size != 1:
                     is_scale = False
                     continue
                 if exist_act(in_nodes0[0]):
                     is_scale = False
                     continue
-                if len(in_nodes0[0].outputs) != 1 or len(
-                        in_nodes0[1].outputs) != 1:
+                if len(in_nodes0[0].outputs) != 1 or len(in_nodes0[1]
+                                                         .outputs) != 1:
                     is_scale = False
                     continue
@@ -939,8 +941,8 @@ class TFOptimizer(object):
                 if exist_act(in_nodes1[1]):
                     is_scale = False
                     continue
-                if len(in_nodes1[0].outputs) != 1 or len(
-                        in_nodes1[1].outputs) != 1:
+                if len(in_nodes1[0].outputs) != 1 or len(in_nodes1[1]
+                                                         .outputs) != 1:
                     is_scale = False
                     continue
@@ -962,8 +964,8 @@ class TFOptimizer(object):
                 scale = 1.0 / in_nodes2[1].value * in_nodes1[0].value
                 act = None
                 if node.fluid_code.layers[0].param_attr is not None:
-                    act = node.fluid_code.layers[0].param_attr.get(
-                        "act", None)
+                    act = node.fluid_code.layers[0].param_attr.get("act",
+                                                                   None)
                 node.fluid_code.clear()
                 attr = {
@@ -972,10 +974,8 @@ class TFOptimizer(object):
                     "bias_after_scale": True,
                     "act": act
                 }
-                node.fluid_code.add_layer("scale",
-                                          inputs=in_node,
-                                          output=node,
-                                          param_attr=attr)
+                node.fluid_code.add_layer(
+                    "scale", inputs=in_node, output=node, param_attr=attr)
                 del self.graph.node_map[in_nodes0[0].layer_name]
                 del self.graph.node_map[in_nodes0[1].layer_name]
@@ -1004,17 +1004,17 @@ class TFOptimizer(object):
                 if exist_act(in_nodes0[0]):
                     is_affine_channel = False
                     continue
-                if len(in_nodes0[0].outputs) != 1 or len(
-                        in_nodes0[1].outputs) != 1:
+                if len(in_nodes0[0].outputs) != 1 or len(in_nodes0[1]
+                                                         .outputs) != 1:
                     is_affine_channel = False
                     continue
                 in_nodes1 = [
                     self.graph.get_node(in_name)
                     for in_name in in_nodes0[0].inputs
                 ]
-                if len(in_nodes1[0].out_shapes[0]
-                       ) != 4 or in_nodes1[1].layer_type != "Const" or len(
-                           in_nodes1[1].value.shape) != 3:
+                if len(in_nodes1[0].out_shapes[0]) != 4 or in_nodes1[
+                        1].layer_type != "Const" or len(in_nodes1[1]
+                                                        .value.shape) != 3:
                     is_affine_channel = False
                     continue
                 if len(in_nodes1[1].outputs) != 1:
@@ -1037,8 +1037,8 @@ class TFOptimizer(object):
                 node.layer_type = "AffineChannel"
                 node.inputs = [in_node.layer_name]
                 scale = 1.0 / in_nodes0[1].value.flatten()
-                bias = in_nodes1[1].value.flatten(
-                ) / in_nodes0[1].value.flatten()
+                bias = in_nodes1[1].value.flatten() / in_nodes0[
+                    1].value.flatten()
                 if not bias_add:
                     bias *= -1.0
                 self.op_mapper.weights[node.layer_name + "_scale"] = scale
@@ -1046,8 +1046,8 @@ class TFOptimizer(object):
                 act = None
                 if node.fluid_code.layers[0].param_attr is not None:
-                    act = node.fluid_code.layers[0].param_attr.get(
-                        "act", None)
+                    act = node.fluid_code.layers[0].param_attr.get("act",
+                                                                   None)
                 node.fluid_code.clear()
                 attr = {
@@ -1055,29 +1055,32 @@ class TFOptimizer(object):
                     "shape": [channel],
                     "name": string(node.layer_name + "_scale")
                 }
-                node.fluid_code.add_layer("create_parameter",
-                                          inputs=None,
-                                          output=node.layer_name + "_scale",
-                                          param_attr=attr)
+                node.fluid_code.add_layer(
+                    "create_parameter",
+                    inputs=None,
+                    output=node.layer_name + "_scale",
+                    param_attr=attr)
                 attr = {
                     "dtype": string(scale.dtype),
                     "shape": [channel],
                     "name": string(node.layer_name + "_bias")
                 }
-                node.fluid_code.add_layer("create_parameter",
-                                          inputs=None,
-                                          output=node.layer_name + "_bias",
-                                          param_attr=attr)
+                node.fluid_code.add_layer(
+                    "create_parameter",
+                    inputs=None,
+                    output=node.layer_name + "_bias",
+                    param_attr=attr)
                 inputs = {
                     "x": in_node,
                     "scale": node.layer_name + "_scale",
                     "bias": node.layer_name + "_bias"
                 }
                 attr = {"act": act}
-                node.fluid_code.add_layer("affine_channel",
-                                          inputs=inputs,
-                                          output=node,
-                                          param_attr=attr)
+                node.fluid_code.add_layer(
+                    "affine_channel",
+                    inputs=inputs,
+                    output=node,
+                    param_attr=attr)
                 del self.graph.node_map[in_nodes0[0].layer_name]
                 del self.graph.node_map[in_nodes0[1].layer_name]
...
@@ -13,6 +13,7 @@
 | ShuffleNet | [code](https://github.com/TropComplique/shufflenet-v2-tensorflow) |-|
 | mNASNet | [code](https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet) |-|
 | EfficientNet | [code](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) |-|
+| Inception_V3 | [code](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v3.py) |-|
 | Inception_V4 | [code](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v4.py) |-|
 | Inception_ResNet_V2 | [code](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_resnet_v2.py) |-|
 | VGG16 | [code](https://github.com/tensorflow/models/tree/master/research/slim/nets) |-|
...