Unverified commit 76d7e3d6 authored by SunAhong1993, committed by GitHub

Merge pull request #2 from PaddlePaddle/develop

add
language: python
python:
- '2.7'
- '3.5'
- '3.6'
script:
- if [[ $TRAVIS_PYTHON_VERSION != 2.7 ]]; then /bin/bash ./tools/check_code_style.sh; fi
notifications:
email:
on_success: change
on_failure: always
...@@ -14,7 +14,7 @@ paddlepaddle >= 1.5.0
**Install the following dependencies as needed**
tensorflow : tensorflow == 1.14.0
caffe :
onnx : onnx == 1.5.0 pytorch == 1.1.0
## Installation
......
...@@ -10,5 +10,3 @@
| Normalize | [code](https://github.com/weiliu89/caffe/blob/ssd/src/caffe/layers/normalize_layer.cpp) |
| ROIPooling | [code](https://github.com/rbgirshick/caffe-fast-rcnn/blob/0dcd397b29507b8314e252e850518c5695efbb83/src/caffe/layers/roi_pooling_layer.cpp) |
| Axpy | [code](https://github.com/hujie-frank/SENet/blob/master/src/caffe/layers/axpy_layer.cpp) |
#!/bin/bash
function abort(){
echo "Your change doesn't follow X2Paddle's code style." 1>&2
echo "Please use pre-commit to check what is wrong." 1>&2
exit 1
}
trap 'abort' 0
set -e
cd $TRAVIS_BUILD_DIR
export PATH=/usr/bin:$PATH
pre-commit install
if ! pre-commit run -a ; then
git diff
exit 1
fi
trap : 0
from six.moves import urllib
import sys
from paddle.fluid.framework import Program
......
__version__ = "0.5.0"
...@@ -106,6 +106,8 @@ def tf2paddle(model_path,
# optimizer below is experimental
optimizer.merge_activation()
optimizer.merge_bias()
optimizer.merge_batch_norm()
optimizer.merge_prelu()
else:
mapper = TFOpMapperNHWC(model)
optimizer = TFOptimizer(mapper)
...@@ -177,6 +179,9 @@ def main():
x2paddle.__version__))
return
assert args.framework is not None, "--framework is not defined(support tensorflow/caffe/onnx)"
assert args.save_dir is not None, "--save_dir is not defined"
try:
import paddle
v0, v1, v2 = paddle.__version__.split('.')
...@@ -185,8 +190,6 @@ def main():
return
except:
print("paddlepaddle not installed, use \"pip install paddlepaddle\"")
assert args.framework is not None, "--framework is not defined(support tensorflow/caffe/onnx)"
assert args.save_dir is not None, "--save_dir is not defined"
if args.framework == "tensorflow":
assert args.model is not None, "--model should be defined while translating tensorflow model"
......
...@@ -13,8 +13,9 @@
# limitations under the License.
from x2paddle.core.graph import GraphNode
import collections
from x2paddle.core.util import *
import collections
import six
class Layer(object):
...@@ -28,7 +29,7 @@ class Layer(object):
def get_code(self):
layer_code = ""
if self.output is not None:
if isinstance(self.output, six.string_types):
layer_code = self.output + " = "
else:
layer_code = self.output.layer_name + " = "
...@@ -47,7 +48,7 @@ class Layer(object):
"[{}]".format(input.index) + ", ")
else:
in_list += (input.layer_name + ", ")
elif isinstance(input, six.string_types):
in_list += (input + ", ")
else:
raise Exception(
...@@ -72,7 +73,7 @@ class Layer(object):
"[{}]".format(self.inputs.index) + ", ")
else:
layer_code += (self.inputs.layer_name + ", ")
elif isinstance(self.inputs, six.string_types):
layer_code += (self.inputs + ", ")
else:
raise Exception("Unknown type of inputs.")
...@@ -119,6 +120,6 @@ class FluidCode(object):
for layer in self.layers:
if isinstance(layer, Layer):
codes.append(layer.get_code())
elif isinstance(layer, six.string_types):
codes.append(layer)
return codes
...@@ -12,6 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
from __future__ import division
import collections
import copy as cp
...@@ -98,8 +100,3 @@ class Graph(object):
raise Exception("node[{}] not in graph".format(dst))
self.node_map[dst].inputs.append(src)
self.node_map[src].outputs.append(dst)
def print(self):
for i, tmp in enumerate(self.topo_sort):
print(tmp, self.node_map[tmp].layer_type, self.node_map[tmp].inputs,
self.node_map[tmp].outputs)
...@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
from paddle.fluid.proto import framework_pb2
from x2paddle.core.util import *
import inspect
...@@ -46,6 +47,28 @@ def export_paddle_param(param, param_name, dir):
fp.close()
# This func will copy to generate code file
def run_net(param_dir="./"):
import os
inputs, outputs = x2paddle_net()
for i, out in enumerate(outputs):
if isinstance(out, list):
for out_part in out:
outputs.append(out_part)
del outputs[i]
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
def if_exist(var):
b = os.path.exists(os.path.join(param_dir, var.name))
return b
fluid.io.load_vars(exe,
param_dir,
fluid.default_main_program(),
predicate=if_exist)
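The output-flattening loop at the top of `run_net` mutates `outputs` while iterating over it, which is easy to misread. A minimal standalone sketch of that pattern (the helper name `flatten_outputs` is mine, not part of the diff):

```python
def flatten_outputs(outputs):
    # Mirrors the loop in run_net: expand one level of nested lists by
    # appending their elements to the end and deleting the list entry.
    # Like the original, it relies on enumerate() seeing the mutated list.
    outputs = list(outputs)
    for i, out in enumerate(outputs):
        if isinstance(out, list):
            for out_part in out:
                outputs.append(out_part)
            del outputs[i]
    return outputs

print(flatten_outputs(["a", ["b", "c"]]))
```

Deleting by index while enumerating shifts later elements left, so this only behaves predictably for the shallow nesting the converter produces.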
class OpMapper(object):
def __init__(self):
self.paddle_codes = ""
......
...@@ -11,8 +11,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
import numpy
import math
import os
...@@ -20,25 +18,3 @@ import os
def string(param):
return "\'{}\'".format(param)
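The `string` helper wraps a value in single quotes so that, when the converter emits generated Python source, the value appears as a string literal rather than a bare identifier. A quick self-contained check:

```python
def string(param):
    # Wrap param in single quotes for use inside generated code text.
    return "'{}'".format(param)

# In generated code, act='relu' needs the quotes that string() adds.
print(string("relu"))
```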
# This func will copy to generate code file
def run_net(param_dir="./"):
import os
inputs, outputs = x2paddle_net()
for i, out in enumerate(outputs):
if isinstance(out, list):
for out_part in out:
outputs.append(out_part)
del outputs[i]
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
def if_exist(var):
b = os.path.exists(os.path.join(param_dir, var.name))
return b
fluid.io.load_vars(exe,
param_dir,
fluid.default_main_program(),
predicate=if_exist)
...@@ -236,11 +236,7 @@ class CaffeDecoder(object):
data.MergeFromString(open(self.model_path, 'rb').read())
pair = lambda layer: (layer.name, self.normalize_pb_data(layer))
layers = data.layers or data.layer
import time
start = time.time()
self.params = [pair(layer) for layer in layers if layer.blobs]
end = time.time()
print('cost:', str(end - start))
def normalize_pb_data(self, layer):
transformed = []
......
This diff is collapsed.
...@@ -135,7 +135,8 @@ class CaffeOpMapper(OpMapper):
if isinstance(params.kernel_size, numbers.Number):
[k_h, k_w] = [params.kernel_size] * 2
elif len(params.kernel_size) > 0:
k_h = params.kernel_h if params.kernel_h > 0 else params.kernel_size[0]
k_w = params.kernel_w if params.kernel_w > 0 else params.kernel_size[len(params.kernel_size) - 1]
elif params.kernel_h > 0 or params.kernel_w > 0:
...@@ -156,8 +157,8 @@ class CaffeOpMapper(OpMapper):
[p_h, p_w] = [params.pad] * 2
elif len(params.pad) > 0:
p_h = params.pad_h if params.pad_h > 0 else params.pad[0]
p_w = params.pad_w if params.pad_w > 0 else params.pad[len(params.pad) - 1]
elif params.pad_h > 0 or params.pad_w > 0:
p_h = params.pad_h
p_w = params.pad_w
...@@ -225,12 +226,17 @@ class CaffeOpMapper(OpMapper):
node.layer_type, params)
if data is None:
data = []
print('The parameter of {} (type is {}) is not set. So we set the parameters as 0'.format(node.layer_name, node.layer_type))
input_c = node.input_shape[0][1]
output_c = channel
data.append(np.zeros([output_c, input_c, kernel[0], kernel[1]]).astype('float32'))
data.append(np.zeros([output_c, ]).astype('float32'))
else:
data = self.adjust_parameters(node)
self.weights[node.layer_name + '_weights'] = data[0]
...@@ -272,12 +278,17 @@ class CaffeOpMapper(OpMapper):
node.layer_type, params)
if data is None:
data = []
print('The parameter of {} (type is {}) is not set. So we set the parameters as 0'.format(node.layer_name, node.layer_type))
input_c = node.input_shape[0][1]
output_c = channel
data.append(np.zeros([output_c, input_c, kernel[0], kernel[1]]).astype('float32'))
data.append(np.zeros([output_c, ]).astype('float32'))
else:
data = self.adjust_parameters(node)
self.weights[node.layer_name + '_weights'] = data[0]
...@@ -369,13 +380,17 @@ class CaffeOpMapper(OpMapper):
data = node.data
params = node.layer.inner_product_param
if data is None:
print('The parameter of {} (type is {}) is not set. So we set the parameters as 0.'.format(node.layer_name, node.layer_type))
input_c = node.input_shape[0][1]
output_c = params.num_output
data = []
data.append(np.zeros([input_c, output_c]).astype('float32'))
data.append(np.zeros([output_c]).astype('float32'))
else:
data = self.adjust_parameters(node)
# Reshape the parameters to Paddle's ordering
...@@ -616,7 +631,8 @@ class CaffeOpMapper(OpMapper):
param_attr=attr)
def BatchNorm(self, node):
assert len(node.inputs) == 1, 'The count of BatchNorm node\'s input is not 1.'
input = self.graph.get_bottom_node(node, idx=0, copy=True)
params = node.layer.batch_norm_param
if hasattr(params, 'eps'):
...@@ -624,11 +640,16 @@ class CaffeOpMapper(OpMapper):
else:
eps = 1e-5
if node.data is None or len(node.data) != 3:
print('The parameter of {} (type is {}) is not set. So we set the parameters as 0'.format(node.layer_name, node.layer_type))
input_c = node.input_shape[0][1]
mean = np.zeros([input_c, ]).astype('float32')
variance = np.zeros([input_c, ]).astype('float32')
scale = 0
else:
node.data = [np.squeeze(i) for i in node.data]
...@@ -655,11 +676,16 @@ class CaffeOpMapper(OpMapper):
def Scale(self, node):
if node.data is None:
print('The parameter of {} (type is {}) is not set. So we set the parameters as 0'.format(node.layer_name, node.layer_type))
input_c = node.input_shape[0][1]
self.weights[node.layer_name + '_scale'] = np.zeros([input_c, ]).astype('float32')
self.weights[node.layer_name + '_offset'] = np.zeros([input_c, ]).astype('float32')
else:
self.weights[node.layer_name + '_scale'] = np.squeeze(node.data[0])
self.weights[node.layer_name + '_offset'] = np.squeeze(node.data[1])
......
...@@ -43,7 +43,8 @@ def get_kernel_parameters(params):
[p_h, p_w] = [params.pad] * 2
elif len(params.pad) > 0:
p_h = params.pad_h if params.pad_h > 0 else params.pad[0]
p_w = params.pad_w if params.pad_w > 0 else params.pad[len(params.pad) - 1]
elif params.pad_h > 0 or params.pad_w > 0:
p_h = params.pad_h
p_w = params.pad_w
......
...@@ -94,7 +94,7 @@ class ONNXOpMapper(OpMapper):
print(op)
return False
def directly_map(self, node, name='', *args, **kwargs):
inputs = node.layer.input
outputs = node.layer.output
op_type = node.layer_type
......
...@@ -24,6 +24,8 @@ import sys
def get_same_padding(in_size, kernel_size, stride):
new_size = int(math.ceil(in_size * 1.0 / stride))
pad_size = (new_size - 1) * stride + kernel_size - in_size
if pad_size < 0:
pad_size = 0
pad0 = int(pad_size / 2)
pad1 = pad_size - pad0
return [pad0, pad1]
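The function above implements TensorFlow's SAME-padding rule, with the newly added clamp so oversized strides can't yield negative padding. A self-contained sketch of the same computation for experimentation:

```python
import math

def get_same_padding(in_size, kernel_size, stride):
    # TensorFlow SAME padding: output size is ceil(in_size / stride);
    # pad just enough for the kernel to cover the input, clamped at zero.
    new_size = int(math.ceil(in_size * 1.0 / stride))
    pad_size = (new_size - 1) * stride + kernel_size - in_size
    if pad_size < 0:
        pad_size = 0
    pad0 = int(pad_size / 2)   # padding before (smaller half)
    pad1 = pad_size - pad0     # padding after
    return [pad0, pad1]

# e.g. a 7x7 stride-2 conv on a 224-wide input pads [2, 3].
print(get_same_padding(224, 7, 2))
```

Note the asymmetry: when the total padding is odd, TensorFlow puts the extra pixel after, which is why the converter emits explicit `pad2d` layers instead of a symmetric padding attribute.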
...@@ -369,6 +371,7 @@ class TFOpMapper(OpMapper):
pad_w = get_same_padding(in_shape[3], k_size[3], strides[3])
pad_h = pad_h[0] + pad_h[1]
pad_w = pad_w[0] + pad_w[1]
if pad_h != 0 or pad_w != 0:
attr = {"paddings": [0, pad_h, 0, pad_w], "pad_value": -10000.0}
node.fluid_code.add_layer("pad2d",
inputs=input,
...@@ -551,6 +554,7 @@ class TFOpMapper(OpMapper):
def Reshape(self, node):
input = self.graph.get_node(node.layer.input[0], copy=True)
param = self.graph.get_node(node.layer.input[1], copy=True)
is_variable = False
if param.layer_type == "Const":
attr = {"shape": param.value.tolist()}
self.add_omit_nodes(param.layer_name, node.layer_name)
...@@ -582,6 +586,24 @@ class TFOpMapper(OpMapper):
new_param += (node.layer_name + "[{}]".format(i) + ", ")
new_param = new_param.strip(", ") + "]"
attr = {"shape": new_param}
is_variable = True
# to change [192, -1] -> [-1, 192], always put -1 in the first dimension
# optimization for Paddle-Lite
in_shape = input.out_shapes[0]
if not is_variable and in_shape.count(-1) < 1:
total_size = 1
for i in range(len(in_shape)):
total_size *= in_shape[i]
for i in range(len(attr["shape"])):
if attr["shape"][i] == 0:
attr["shape"][i] = in_shape[i]
if attr["shape"][i] != -1:
total_size /= attr["shape"][i]
if attr["shape"].count(-1) > 0:
index = attr["shape"].index(-1)
attr["shape"][index] = int(total_size)
attr["shape"][0] = -1
if len(input.out_shapes[0]) == 4 and node.tf_data_format == "NHWC":
if len(attr["shape"]) < 3:
......
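The Reshape block above resolves `0` and `-1` entries against the known input shape and then forces `-1` into the batch dimension for Paddle-Lite. That resolution logic can be sketched as a standalone helper (the name `resolve_reshape` is mine; integer division stands in for the original's float division, which is equivalent for the exact divisions involved):

```python
def resolve_reshape(in_shape, target):
    # in_shape: fully known input shape (no -1 entries);
    # target: requested shape, where 0 copies the input dim and -1 is inferred.
    total = 1
    for d in in_shape:
        total *= d
    shape = list(target)
    for i in range(len(shape)):
        if shape[i] == 0:
            shape[i] = in_shape[i]
        if shape[i] != -1:
            total //= shape[i]
    if -1 in shape:
        shape[shape.index(-1)] = int(total)
    shape[0] = -1  # keep the batch dimension dynamic for Paddle-Lite
    return shape

print(resolve_reshape([4, 192, 2], [192, -1]))
```

The final `shape[0] = -1` is the Paddle-Lite-motivated rewrite mentioned in the comment: once every dimension is concrete, the leading dimension is made dynamic again so the converted model accepts variable batch sizes.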
...@@ -24,6 +24,8 @@ import sys
def get_same_padding(in_size, kernel_size, stride):
new_size = int(math.ceil(in_size * 1.0 / stride))
pad_size = (new_size - 1) * stride + kernel_size - in_size
if pad_size < 0:
pad_size = 0
pad0 = int(pad_size / 2)
pad1 = pad_size - pad0
return [pad0, pad1]
...@@ -500,6 +502,7 @@ class TFOpMapperNHWC(OpMapper):
def Reshape(self, node):
input = self.graph.get_node(node.layer.input[0], copy=True)
param = self.graph.get_node(node.layer.input[1], copy=True)
is_variable = False
if param.layer_type == "Const":
attr = {"shape": param.value.tolist()}
self.add_omit_nodes(param.layer_name, node.layer_name)
...@@ -527,6 +530,24 @@ class TFOpMapperNHWC(OpMapper):
new_param += (node.layer_name + "[{}]".format(i) + ", ")
new_param = new_param.strip(", ") + "]"
attr = {"shape": new_param}
is_variable = True
# to change [192, -1] -> [-1, 192], always put -1 in the first dimension
# optimization for Paddle-Lite
in_shape = input.out_shapes[0]
if not is_variable and in_shape.count(-1) < 1:
total_size = 1
for i in range(len(in_shape)):
total_size *= in_shape[i]
for i in range(len(attr["shape"])):
if attr["shape"][i] == 0:
attr["shape"][i] = in_shape[i]
if attr["shape"][i] != -1:
total_size /= attr["shape"][i]
if attr["shape"].count(-1) > 0:
index = attr["shape"].index(-1)
attr["shape"][index] = int(total_size)
attr["shape"][0] = -1
node.fluid_code.add_layer("reshape",
inputs=input,
output=node,
......
...@@ -16,6 +16,7 @@
from x2paddle.op_mapper.tf_op_mapper import TFOpMapper
from x2paddle.core.fluid_code import Layer
from x2paddle.core.util import *
import numpy
import copy as cp
...@@ -351,3 +352,311 @@ class TFOptimizer(object):
if node.fluid_code.layers[-1].op == "transpose":
node.fluid_code.layers[-2].output = name
del node.fluid_code.layers[-1]
def merge_batch_norm(self):
for i, name in enumerate(self.graph.topo_sort):
node = self.graph.get_node(name)
if node is None:
continue
is_batch_norm = True
if node.layer_type == "Add":
in_nodes0 = [
self.graph.get_node(in_name) for in_name in node.inputs
]
if in_nodes0[0].layer_type != "Mul" or in_nodes0[
1].layer_type != "Sub":
is_batch_norm = False
continue
in_nodes1 = [
self.graph.get_node(in_name)
for in_name in in_nodes0[0].inputs
]
in_nodes2 = [
self.graph.get_node(in_name)
for in_name in in_nodes0[1].inputs
]
if len(in_nodes1[0].out_shapes[0]) != 4:
is_batch_norm = False
continue
if in_nodes1[1].layer_type != "Mul":
is_batch_norm = False
continue
if in_nodes2[0].layer_type != "Const" or in_nodes2[
1].layer_type != "Mul":
is_batch_norm = False
continue
in_nodes3 = [
self.graph.get_node(in_name)
for in_name in in_nodes1[1].inputs
]
if in_nodes3[0].layer_type != "Rsqrt" or in_nodes3[
1].layer_type != "Const":
is_batch_norm = False
continue
in_nodes4 = [
self.graph.get_node(in_name)
for in_name in in_nodes2[1].inputs
]
if in_nodes4[0].layer_type != "Const" or in_nodes4[
1].layer_name != in_nodes1[1].layer_name:
is_batch_norm = False
continue
in_nodes5 = self.graph.get_node(in_nodes3[0].inputs[0])
if in_nodes5.layer_type != "Add":
is_batch_norm = False
continue
in_nodes6 = [
self.graph.get_node(in_name) for in_name in in_nodes5.inputs
]
if in_nodes6[0].layer_type != "Const" or in_nodes6[
1].layer_type != "Const":
is_batch_norm = False
continue
if len(in_nodes0[0].outputs) != 1:
is_batch_norm = False
continue
if len(in_nodes0[1].outputs) != 1:
is_batch_norm = False
continue
if len(in_nodes1[1].outputs) != 2:
is_batch_norm = False
continue
if len(in_nodes2[0].outputs) != 1:
is_batch_norm = False
continue
if len(in_nodes2[1].outputs) != 1:
is_batch_norm = False
continue
if len(in_nodes3[0].outputs) != 1:
is_batch_norm = False
continue
if len(in_nodes3[1].outputs) != 1:
is_batch_norm = False
continue
if len(in_nodes4[0].outputs) != 1:
is_batch_norm = False
continue
if len(in_nodes5.outputs) != 1:
is_batch_norm = False
continue
if len(in_nodes6[0].outputs) != 1:
is_batch_norm = False
continue
if len(in_nodes6[1].outputs) != 1:
is_batch_norm = False
continue
conv_shape = in_nodes1[0].out_shapes[0]
if conv_shape[3] < 0:
is_batch_norm = False
continue
# moving_variance
if in_nodes6[0].value.size != conv_shape[3]:
is_batch_norm = False
continue
# epsilon
if in_nodes6[1].value.size != 1:
is_batch_norm = False
continue
# gamma
if in_nodes3[1].value.size != conv_shape[3]:
is_batch_norm = False
continue
# moving_mean
if in_nodes4[0].value.size != conv_shape[3]:
is_batch_norm = False
continue
# beta
if in_nodes2[0].value.size != conv_shape[3]:
is_batch_norm = False
continue
if is_batch_norm:
index = in_nodes1[0].outputs.index(in_nodes0[0].layer_name)
del in_nodes1[0].outputs[index]
node.layer_type = "FusedBatchNorm"
node.inputs = [in_nodes1[0].layer_name]
node.outputs = node.outputs
act = node.fluid_code.layers[-1].param_attr.get("act", None)
node.fluid_code.clear()
attr = {
"epsilon": in_nodes6[1].value,
"param_attr": string(in_nodes3[1].layer_name),
"bias_attr": string(in_nodes2[0].layer_name),
"moving_mean_name": string(in_nodes4[0].layer_name),
"moving_variance_name": string(in_nodes6[0].layer_name),
"is_test": True,
"act": act
}
node.fluid_code.add_layer(
"batch_norm",
inputs=in_nodes1[0].fluid_code.layers[-1].output,
output=node,
param_attr=attr)
del self.graph.node_map[in_nodes0[0].layer_name]
del self.graph.node_map[in_nodes0[1].layer_name]
del self.graph.node_map[in_nodes1[1].layer_name]
del self.graph.node_map[in_nodes2[1].layer_name]
del self.graph.node_map[in_nodes3[0].layer_name]
del self.graph.node_map[in_nodes4[0].layer_name]
del self.graph.node_map[in_nodes5.layer_name]
def merge_prelu(self):
for i, name in enumerate(self.graph.topo_sort):
node = self.graph.get_node(name)
if node is None:
continue
is_prelu = True
if node.layer_type == "Add":
in_nodes0 = [
self.graph.get_node(in_name) for in_name in node.inputs
]
if in_nodes0[0].layer_type != "Relu" or in_nodes0[
1].layer_type != "Mul":
is_prelu = False
continue
if len(in_nodes0[0].outputs) != 1 or len(
in_nodes0[1].outputs) != 1:
is_prelu = False
continue
in_nodes1 = self.graph.get_node(in_nodes0[0].inputs[0])
in_nodes2 = [
self.graph.get_node(in_name)
for in_name in in_nodes0[1].inputs
]
if in_nodes2[1].layer_type != "Const" or numpy.fabs(
in_nodes2[1].value - 0.5) > 1e-06:
is_prelu = False
continue
if in_nodes2[0].layer_type != "Mul":
is_prelu = False
continue
if len(in_nodes2[1].outputs) != 1 or len(
in_nodes2[0].outputs) != 1:
is_prelu = False
continue
in_nodes3 = [
self.graph.get_node(in_name)
for in_name in in_nodes2[0].inputs
]
if in_nodes3[0].layer_type != "Const" or in_nodes3[
1].layer_type != "Sub":
is_prelu = False
continue
if len(in_nodes3[0].outputs) != 1 or len(
in_nodes3[1].outputs) != 1:
is_prelu = False
continue
in_nodes4 = [
self.graph.get_node(in_name)
for in_name in in_nodes3[1].inputs
]
if in_nodes4[0].layer_name != in_nodes1.layer_name or in_nodes4[
1].layer_type != "Abs":
is_prelu = False
continue
if len(in_nodes4[1].outputs) != 1:
is_prelu = False
continue
in_nodes5 = self.graph.get_node(in_nodes4[1].inputs[0])
if in_nodes5.layer_name != in_nodes1.layer_name:
is_prelu = False
continue
if len(in_nodes0[0].outputs) != 1:
is_prelu = False
continue
if len(in_nodes0[1].outputs) != 1:
is_prelu = False
continue
if len(in_nodes1.outputs) < 3:
is_prelu = False
continue
if len(in_nodes2[0].outputs) != 1:
is_prelu = False
continue
if len(in_nodes2[1].outputs) != 1:
is_prelu = False
continue
if len(in_nodes3[0].outputs) != 1:
is_prelu = False
continue
if len(in_nodes3[1].outputs) != 1:
is_prelu = False
continue
if len(in_nodes4[1].outputs) != 1:
is_prelu = False
continue
mode = None
in_shape = in_nodes1.out_shapes[0]
if in_shape == list(in_nodes3[0].value.shape):
mode = "element"
elif len(in_nodes3[0].value.shape) == 0:
mode = "all"
elif len(in_nodes3[0].value.shape
) == 1 and in_nodes3[0].value.shape[0] == 1:
mode = "all"
elif len(in_shape) == 4 and len(
in_nodes3[0].value.shape
) == 1 and in_nodes3[0].value.shape[0] == in_shape[-1]:
mode = "channel"
weight = self.op_mapper.weights[in_nodes3[0].layer_name]
weight = numpy.expand_dims(weight, 0)
weight = numpy.expand_dims(weight, 2)
weight = numpy.expand_dims(weight, 3)
self.op_mapper.weights[in_nodes3[0].layer_name] = weight
in_nodes3[0].fluid_code.layers[0].param_attr["shape"] = [
1, in_shape[-1], 1, 1
]
else:
is_prelu = False
continue
if is_prelu:
index = in_nodes1.outputs.index(in_nodes0[0].layer_name)
del in_nodes1.outputs[index]
index = in_nodes1.outputs.index(in_nodes3[1].layer_name)
del in_nodes1.outputs[index]
index = in_nodes1.outputs.index(in_nodes4[1].layer_name)
del in_nodes1.outputs[index]
node.layer_type = "Prelu"
node.inputs = [in_nodes1.layer_name]
node.outputs = node.outputs
act = node.fluid_code.layers[-1].param_attr.get("act", None)
node.fluid_code.clear()
attr = {
"mode": string(mode),
"param_attr": string(in_nodes3[0].layer_name)
}
node.fluid_code.add_layer(
"prelu",
inputs=in_nodes1.fluid_code.layers[-1].output,
output=node,
param_attr=attr)
del self.graph.node_map[in_nodes0[0].layer_name]
del self.graph.node_map[in_nodes0[1].layer_name]
del self.graph.node_map[in_nodes2[0].layer_name]
del self.graph.node_map[in_nodes2[1].layer_name]
del self.graph.node_map[in_nodes3[1].layer_name]
del self.graph.node_map[in_nodes4[1].layer_name]
...@@ -39,7 +39,7 @@
| ResNet50 | [code](https://github.com/soeaver/caffe-model/blob/master/cls/resnet/deploy_resnet50.prototxt) |
| Unet | [code](https://github.com/jolibrain/deepdetect/blob/master/templates/caffe/unet/deploy.prototxt) |
| VGGNet | [code](https://gist.github.com/ksimonyan/211839e770f7b538e2d8#file-vgg_ilsvrc_16_layers_deploy-prototxt) |
| FaceDetection | [code](https://github.com/ShiqiYu/libfacedetection/blob/master/models/caffe/yufacedetectnet-open-v1.prototxt) |
...@@ -65,4 +65,3 @@
| mNASNet | [pytorch(personal practice)](https://github.com/rwightman/gen-efficientnet-pytorch) |9|
| EfficientNet | [pytorch(personal practice)](https://github.com/rwightman/gen-efficientnet-pytorch) |9|
| SqueezeNet | [onnx official](https://s3.amazonaws.com/download.onnx/models/opset_9/squeezenet.tar.gz) |9|