Unverified commit 36ee5842, authored by Jason, committed by GitHub

Merge pull request #299 from mamingjie-China/develop

add Fill Minimum DepthToSpace Floor, and fix bug in Conv2D
# List of OPs supported by X2Paddle
-> X2Paddle currently supports 40+ TensorFlow OPs and 30+ Caffe Layers, covering most operations commonly used in CV classification models. The full list of currently supported OPs is given below.
+> X2Paddle currently supports 50+ TensorFlow OPs and 30+ Caffe Layers, covering most operations commonly used in CV classification models. The full list of currently supported OPs is given below.
**Note:** Some OPs are not supported yet. If you run into an unsupported OP during conversion, you can add it yourself or report it to us. Feel free to file an [issue](https://github.com/PaddlePaddle/X2Paddle/issues/new) (model name, code, or a way to obtain the model), and we will follow up promptly :)
@@ -20,7 +20,7 @@
| 41 | Cast | 42 | Split | 43 | Squeeze | 44 | ResizeNearestNeighbor |
| 45 | Softmax | 46 | Range | 47 | ConcatV2 | 48 | MirrorPad |
| 49 | Identity | 50 | GreaterEqual | 51 | StopGradient | 52 | Minimum |
-| 53 | RadnomUniform | | | | | | |
+| 53 | RandomUniform | 54 | Fill | 55 | Floor | 56 | DepthToSpace |
## Caffe
@@ -46,7 +46,8 @@ class TFOpMapperNHWC(OpMapper):
        'Softplus': ['softplus'],
        'LeakyRelu': ['leaky_relu', {
            'alpha': 'alpha'
-        }]
+        }],
+        'Floor': ['floor']
    }
    elementwise_ops = {
        'Add': 'elementwise_add',
@@ -54,6 +55,7 @@ class TFOpMapperNHWC(OpMapper):
        'RealDiv': 'elementwise_div',
        'Sub': 'elementwise_sub',
        'Maximum': 'elementwise_max',
+        'Minimum': 'elementwise_min',
        'Mul': 'elementwise_mul',
        'FloorDiv': 'elementwise_floordiv'
    }
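For orientation, the two new mapping entries ('Floor' and 'Minimum') translate into ordinary fluid calls in the generated code. A minimal sketch, assuming fluid 1.x and using placeholder variable names and shapes (not the converter's literal output):

```python
import paddle.fluid as fluid

# Stand-ins for the converted TF tensors; names and shapes are illustrative.
x = fluid.layers.data(name='x', shape=[3, 224, 224], dtype='float32')
y = fluid.layers.data(name='y', shape=[3, 224, 224], dtype='float32')

floor_out = fluid.layers.floor(x)             # TF Floor   via directly_map_ops['Floor']
min_out = fluid.layers.elementwise_min(x, y)  # TF Minimum via elementwise_ops['Minimum']
```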
@@ -202,6 +204,52 @@ class TFOpMapperNHWC(OpMapper):
        node.fluid_code.add_layer(
            "transpose", inputs=input, output=node, param_attr=attr)
+
+    def Fill(self, node):
+        dims = self.graph.get_node(node.layer.input[0], copy=True)
+        input_value = self.graph.get_node(node.layer.input[1], copy=True)
+        assert input_value.layer_type == "Const", "Value of fill OP should be Const"
+        self.add_omit_nodes(input_value.layer_name, node.layer_name)
+        input_value = input_value.value
+        input_dtype = string(input_value.dtype)
+        attr = {'value': input_value, 'dtype': input_dtype}
+        node.fluid_code.add_layer(
+            "fill_constant", inputs=dims, output=node, param_attr=attr)
+
+    def DepthToSpace(self, node):
+        input = self.graph.get_node(node.layer.input[0], copy=True)
+        block_size = node.get_attr("block_size")
+        data_format = node.get_attr("data_format").decode()
+        if data_format == "NHWC":
+            attr = {"perm": [0, 3, 1, 2]}
+            node.fluid_code.add_layer(
+                "transpose", inputs=input, output=input, param_attr=attr)
+        n, h, w, c = input.out_shapes[0]
+
+        attr = {'shape': [0, block_size * block_size, -1, h, w]}
+        node.fluid_code.add_layer(
+            "reshape", inputs=input, output=input, param_attr=attr)
+        attr = {'perm': [0, 2, 1, 3, 4]}
+        node.fluid_code.add_layer(
+            "transpose", inputs=input, output=input, param_attr=attr)
+        attr = {'shape': [0, c, h, w]}
+        node.fluid_code.add_layer(
+            "reshape", inputs=input, output=input, param_attr=attr)
+
+        attr = {'upscale_factor': block_size}
+        node.fluid_code.add_layer(
+            "pixel_shuffle", inputs=input, output=node, param_attr=attr)
+        if data_format == "NHWC":
+            attr = {"perm": [0, 2, 3, 1]}
+            node.fluid_code.add_layer(
+                "transpose", inputs=node, output=node, param_attr=attr)
+
    def MaxPool(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
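The reshape/transpose pair that `DepthToSpace` emits before `pixel_shuffle` exists because TensorFlow's NHWC DepthToSpace stores the sub-pixel offsets in channel order `[i, j, C_out]`, while `pixel_shuffle` expects `[C_out, i, j]`. A NumPy-only sketch (helper names are illustrative; both functions are reference re-implementations, not library calls) that checks the equivalence of the generated sequence:

```python
import numpy as np

def tf_depth_to_space_nhwc(x, bs):
    # Reference semantics of TF DepthToSpace (NHWC): channel index = (i*bs + j)*c_out + d.
    n, h, w, c = x.shape
    c_out = c // (bs * bs)
    x = x.reshape(n, h, w, bs, bs, c_out)    # split channels into (i, j, d)
    x = x.transpose(0, 1, 3, 2, 4, 5)        # -> n, h, i, w, j, d
    return x.reshape(n, h * bs, w * bs, c_out)

def pixel_shuffle_nchw(x, bs):
    # Reference semantics of pixel_shuffle (NCHW): channel index = d*bs*bs + i*bs + j.
    n, c, h, w = x.shape
    c_out = c // (bs * bs)
    x = x.reshape(n, c_out, bs, bs, h, w)    # split channels into (d, i, j)
    x = x.transpose(0, 1, 4, 2, 5, 3)        # -> n, d, h, i, w, j
    return x.reshape(n, c_out, h * bs, w * bs)

n, h, w, c, bs = 2, 3, 4, 8, 2
x = np.random.rand(n, h, w, c).astype("float32")

# The sequence emitted above: NHWC -> NCHW, channel re-order, pixel_shuffle, NCHW -> NHWC.
y = x.transpose(0, 3, 1, 2)                      # NHWC -> NCHW
y = y.reshape(n, bs * bs, c // (bs * bs), h, w)  # channel -> (i*bs + j, d)
y = y.transpose(0, 2, 1, 3, 4)                   # channel -> (d, i*bs + j)
y = y.reshape(n, c, h, w)
y = pixel_shuffle_nchw(y, bs)
y = y.transpose(0, 2, 3, 1)                      # NCHW -> NHWC

assert np.allclose(y, tf_depth_to_space_nhwc(x, bs))
```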
@@ -236,6 +284,7 @@ class TFOpMapperNHWC(OpMapper):
    def Conv2D(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
        kernel = self.graph.get_node(node.layer.input[1], copy=True)
-        self.add_omit_nodes(kernel.layer_name, node.layer_name)
        k_size = kernel.out_shapes[0]
        strides = node.get_attr("strides")
@@ -245,10 +294,17 @@ class TFOpMapperNHWC(OpMapper):
        channel_first = data_format == "NCHW"
        if kernel.layer_type == 'Const':
+            self.add_omit_nodes(kernel.layer_name, node.layer_name)
            kernel_value = kernel.value
-            self.weights[kernel.layer_name.replace('/', '_')] = numpy.transpose(
-                kernel_value, (3, 2, 0, 1))
+            kernel_weight_name = kernel.layer_name.replace('/', '_')
        else:
            kernel_value = self.decoder.infer_tensor(kernel)
+            if kernel.layer_type == 'Split':
+                kernel_weight_name = "{}_{}_kernel".format(node.layer_name,
+                                                           kernel.layer_name)
+            else:
+                kernel_weight_name = kernel.layer_name.replace('/', '_')
+        self.weights[kernel_weight_name] = numpy.transpose(kernel_value,
+                                                           (3, 2, 0, 1))
        if not channel_first:
            strides = [strides[i] for i in [0, 3, 1, 2]]
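The `numpy.transpose(kernel_value, (3, 2, 0, 1))` above converts the kernel from TensorFlow's HWIO layout (filter_height, filter_width, in_channels, out_channels) to the OIHW layout used for Paddle conv2d weights, while `kernel_weight_name` keeps the stored weight name unique when the kernel comes from a `Split` node instead of a `Const`. A short sketch with made-up shapes:

```python
import numpy as np

hwio = np.zeros((3, 3, 16, 32), dtype="float32")  # TF kernel: H, W, in_channels, out_channels
oihw = np.transpose(hwio, (3, 2, 0, 1))           # Paddle kernel: out_channels, in_channels, H, W
assert oihw.shape == (32, 16, 3, 3)
```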
@@ -257,10 +313,9 @@ class TFOpMapperNHWC(OpMapper):
            node.fluid_code.add_layer(
                "transpose", inputs=input, output=node, param_attr=attr)
            input = node
        attr = {
            "bias_attr": False,
-            "param_attr": string(kernel.layer_name),
+            "param_attr": string(kernel_weight_name),
            "num_filters": k_size[3],
            "filter_size": k_size[0:2],
            "stride": strides[2:4],
@@ -700,12 +755,15 @@ class TFOpMapperNHWC(OpMapper):
        input = self.graph.get_node(node.layer.input[2], copy=True)
        assert kernel.layer_type == "Const", "Kernel of Conv2DBackpropInput should be Const"
-        assert out_shape.layer_type == "Const", "Out_shape of Conv2DBackpropInput should be Const"
        self.add_omit_nodes(kernel.layer_name, node.layer_name)
-        self.add_omit_nodes(out_shape.layer_name, node.layer_name)
+        if out_shape.layer_type == "Const":
            out_shape = out_shape.value.tolist()
+            self.add_omit_nodes(out_shape.layer_name, node.layer_name)
+        else:
+            out_shape = self.decoder.infer_shape_tensor(out_shape,
+                                                        node.out_shapes[0])
        in_shape = input.out_shapes[0]
        if in_shape.count(-1) > 2:
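The relaxed handling matters because in TensorFlow 1.x graphs the `output_shape` passed to `tf.nn.conv2d_transpose` is often computed at runtime rather than stored as a Const, in which case the converter now falls back to `decoder.infer_shape_tensor`. A hypothetical TF 1.x snippet that produces such a dynamic shape node:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 32, 32, 16])
w = tf.get_variable("deconv_w", [3, 3, 8, 16])  # filter: H, W, out_channels, in_channels
# output_shape is assembled with tf.shape/tf.stack, so it appears in the graph as a
# computed tensor rather than a Const node.
out_shape = tf.stack([tf.shape(x)[0], 64, 64, 8])
y = tf.nn.conv2d_transpose(x, w, output_shape=out_shape,
                           strides=[1, 2, 2, 1], padding="SAME")
```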
# X2Paddle model test suite
-> X2Paddle currently supports 40+ TensorFlow OPs and 40+ Caffe Layers, covering most operations commonly used in CV classification models. X2Paddle's conversion has been tested on the models listed below.
+> X2Paddle currently supports 50+ TensorFlow OPs and 40+ Caffe Layers, covering most operations commonly used in CV classification models. X2Paddle's conversion has been tested on the models listed below.
**Note:** Due to differences between frameworks, some models cannot be converted at the moment, e.g. TensorFlow models containing control flow, NLP models, etc. For common CV models, if you find that a model cannot be converted, conversion fails, or the results show a large diff, feel free to let us know via an [issue](https://github.com/PaddlePaddle/X2Paddle/issues/new) (model name, code, or a way to obtain the model), and we will follow up promptly :)
@@ -22,7 +22,8 @@
| UNet | [code1](https://github.com/jakeret/tf_unet)/[code2](https://github.com/lyatdawn/Unet-Tensorflow) |-|
|MTCNN | [code](https://github.com/AITTSMD/MTCNN-Tensorflow) |-|
|YOLO-V3| [code](https://github.com/YunYang1994/tensorflow-yolov3) | Conversion requires disabling the NHWC->NCHW optimization; see [Q2 in the FAQ](FAQ.md) |
|Inception_ResNet_V2| [code](https://github.com/tensorflow/models/tree/master/research/slim/nets) | - |
| FALSR | [code](https://github.com/xiaomi-automl/FALSR) | - |
| DCSCN | [code](https://modelzoo.co/model/dcscn-super-resolution) | - |
## Caffe