Commit e6e5dbb9 authored by: J jiangjiajun

modify optimizer and fix conflicts

...@@ -5,29 +5,10 @@ A: This message means that the input tensor(
**Q2. How do I fix a failed TensorFlow model conversion?**
A: If the failure is not caused by an unsupported OP, it is most likely caused by the NHWC->NCHW format conversion that X2Paddle applies when converting TensorFlow models. In that case, you can disable the format optimization and fix the input shapes, then retry the conversion with the command below; during conversion, enter the fixed shape of each tensor when prompted. For example, if the original TensorFlow model's input shape is `[None, 224, 224, 3]`, enter a shape such as `[2, 224, 224, 3]`.
```
x2paddle -f tensorflow -m tf.pb -s pd-model --without_data_format_optimization --define_input_shape
```
> 1. Most TensorFlow CV models currently use the `NHWC` input format, while Paddle's default input format is `NCHW`, so during conversion X2Paddle rewrites parameters such as `axis` and `shape` to fit Paddle's NCHW format. For a sufficiently complex TensorFlow model, this rewriting may fail.
> 2. By default, the Paddle model obtained from a TensorFlow model takes `NCHW` input. When `--without_data_format_optimization` is specified, the converted Paddle model keeps the `NHWC` input format instead.
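For intuition, here is a minimal sketch (not X2Paddle code; all names are illustrative) of what the NHWC->NCHW rewrite involves: the data layout is transposed, and layout-dependent attributes such as `axis` must be remapped to the new axis order.

```
# a minimal illustration of the NHWC -> NCHW rewrite, not part of X2Paddle
import numpy as np

nhwc = np.random.rand(1, 224, 224, 3).astype("float32")  # TensorFlow layout
nchw = nhwc.transpose(0, 3, 1, 2)                         # Paddle layout

# layout-dependent attributes must be remapped to the new axis order:
# NHWC axes (N, H, W, C) = (0, 1, 2, 3) become (0, 2, 3, 1) in NCHW
nhwc_to_nchw_axis = {0: 0, 1: 2, 2: 3, 3: 1}
channel_axis_nchw = nhwc_to_nchw_axis[3]  # NHWC channel axis 3 maps to 1
```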
...@@ -4,24 +4,19 @@
X2Paddle is a toolkit for converting models trained in other deep learning frameworks to PaddlePaddle models.
## Model Zoo
X2Paddle has tested the conversion of TensorFlow/Caffe/ONNX models on a number of mainstream CV models; see [X2Paddle-Model-Zoo](x2paddle_model_zoo.md) for the list of tested models. If you have tested the conversion of a new model, you are welcome to extend this list; if a model cannot be converted, please report it to us via an ISSUE and we will follow up as soon as possible.
## Requirements
python >= 3.5
paddlepaddle >= 1.5.0

**Install the following dependencies only as needed**
tensorflow : tensorflow == 1.14.0
caffe : caffe == 1.0.0
onnx : onnx == 1.5.0 pytorch == 1.1.0

## Installation
### Option 1 (recommended)
To use the latest code, install X2Paddle as follows
...@@ -63,18 +58,36 @@ x2paddle --framework=onnx --model=onnx_model.onnx --save_dir=pd_model
|--prototxt | Path of the caffe model's proto file, used when framework is caffe |
|--weight | Path of the caffe model's weight file, used when framework is caffe |
|--save_dir | Directory in which the converted model is saved |
|--model | Path of the TensorFlow pb model or the ONNX model, used when framework is tensorflow/onnx |
|--caffe_proto | **[optional]** Path of the caffe_pb2.py file compiled from caffe.proto, used when the model contains custom Layers, default None |
|--without_data_format_optimization | **[optional]** For TensorFlow, disables the NHWC->NCHW optimization, see [FAQ Q2](FAQ.md) |
|--define_input_shape | **[optional]** For TensorFlow, prompts the user to enter the shape of every Placeholder, see [FAQ Q2](FAQ.md) |
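As a concrete example, a TensorFlow conversion that combines the two optional TensorFlow flags from the table above (the file names `tf_model.pb` and `pd_model` are placeholders):

```
x2paddle --framework=tensorflow --model=tf_model.pb --save_dir=pd_model --without_data_format_optimization --define_input_shape
```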
## Using the converted model
The converted model consists of two directories: `model_with_code` and `inference_model`.
`model_with_code` holds the model parameters together with the converted Python model code.
`inference_model` holds the serialized model structure and parameters, which can be loaded directly through Paddle's API; see [load_inference_model](https://www.paddlepaddle.org.cn/documentation/docs/zh/1.5/api_guides/low_level/inference.html#api-guide-inference)
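A minimal loading sketch, assuming PaddlePaddle 1.5, a save directory of `pd_model` as in the example above, and a single NCHW image input (adjust the shape to your model):

```
import numpy as np
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())
# load the serialized program and parameters produced by X2Paddle
[program, feed_names, fetch_targets] = fluid.io.load_inference_model(
    dirname="pd_model/inference_model", executor=exe)
# run one forward pass on random data; replace with real preprocessed input
data = np.random.rand(1, 3, 224, 224).astype("float32")
results = exe.run(program, feed={feed_names[0]: data}, fetch_list=fetch_targets)
```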
## Tools
X2Paddle provides tools for the following tasks; see [tools/README.md](tools/README.md)
1. Check whether a model is supported by PaddleLite
2. Merge model parameter files
## Related documents
1. [Frequently asked questions about X2Paddle](FAQ.md)
2. [How to export a TensorFlow pb model](export_tf_model.md)
3. [X2Paddle model test zoo](x2paddle_model_zoo.md)
4. [Exporting a PyTorch model to ONNX](pytorch_to_onnx.md)
## Update history
2019.08.05
1. Unified the tensorflow/caffe/onnx model conversion code and the public interface
2. Fixed the issue that the previous caffe2fluid could not convert multi-branch models
3. Fixed the issue that models saved on Windows could not be loaded
4. Added an optimizer, restructured the code, and fused the bias and activation function into conv and batch_norm

**If you need the previous tensorflow2fluid/caffe2fluid/onnx2fluid, the code of the previous release is still available on the release-0.3 branch.**
## Acknowledgements
......
## Exporting a PyTorch model to ONNX
onnx2paddle currently mainly supports ONNX operator version 9. With the sample code below, you can convert a torchvision model, or a model you developed yourself, to an ONNX model:
```
#coding: utf-8
import torch
import torchvision

# input shape of the model
dummy_input = torch.randn(1, 3, 224, 224)
# build the pytorch model and load the pretrained parameters
resnet18 = torchvision.models.resnet18(pretrained=True)
# export the resnet18.onnx model file
torch.onnx.export(resnet18, dummy_input, "resnet18.onnx", verbose=True)
```
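The exported `resnet18.onnx` can then be converted with the x2paddle command shown earlier in this document:

```
x2paddle --framework=onnx --model=resnet18.onnx --save_dir=pd_model
```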
### 1. PaddleLite deployment
All models converted by X2Paddle can be used for inference with Paddle Fluid. For deployment on PaddleLite, however, you need to check whether every OP in the model is supported by PaddleLite; `check_for_lite.py` performs this check.
```
python tools/check_for_lite.py paddle_model/inference_model/__model__
```
### 2. Merging model parameters
The output directory produced by X2Paddle contains two subdirectories,
1. `model_with_code`: the saved parameter files plus the Python model code, for debugging
2. `inference_model`: the parameter files and the serialized model structure, for inference deployment
In `inference_model`, X2Paddle saves each parameter in a separate file whose name matches the parameter name; you can use `merge_params.py` to merge these parameter files into a single file
```
python tools/merge_params.py paddle_model/inference_model new_model_dir
```
The merged model is saved in `new_model_dir`; a loading sketch follows.
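A sketch of loading the merged model, assuming the merged parameter file is written as `__params__` (the file names below are assumptions; check the actual names written to `new_model_dir`):

```
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())
# hypothetical file names -- verify what merge_params.py actually wrote
[program, feed_names, fetch_targets] = fluid.io.load_inference_model(
    dirname="new_model_dir", executor=exe,
    model_filename="__model__", params_filename="__params__")
```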
__version__ = "0.4.5"
...@@ -143,14 +143,17 @@ def onnx2paddle(model_path, save_dir):
    except:
        print("onnx is not installed, use \"pip install onnx==1.5.0\".")
        return
    print("Now translating model from onnx to paddle.")

    from x2paddle.decoder.onnx_decoder import ONNXDecoder
    from x2paddle.op_mapper.onnx_op_mapper import ONNXOpMapper
    from x2paddle.optimizer.onnx_optimizer import ONNXOptimizer

    model = ONNXDecoder(model_path)
    mapper = ONNXOpMapper(model)
    optimizer = ONNXOptimizer(mapper)
    optimizer.delete_redundance_code()
    mapper.save_inference_model(save_dir)
......
...@@ -23,6 +23,7 @@ from onnx.helper import get_attribute_value, make_attribute
from onnx.shape_inference import infer_shapes
from onnx.mapping import TENSOR_TYPE_TO_NP_TYPE
from onnx.numpy_helper import to_array
from onnx import AttributeProto, TensorProto, GraphProto
from collections import OrderedDict as Dict
import onnx
import numpy as np
...@@ -44,7 +45,7 @@ class ONNXGraphNode(GraphNode):
        self.attr_map = self.get_attr_map()
        self.dtype_map = {1: "float32", 3: "int32", 9: "int64"}
        self.weight_inputs = list()
        self.out_shapes = list()
        self.dtype = None

    def get_attr_map(self):
...@@ -58,11 +59,10 @@ class ONNXGraphNode(GraphNode):
    @property
    def value(self):
        assert 'Constant' in self.layer_type, "Only Constant | ConstantOfShape node has value."
        attr = self.layer.attribute['value']
        if 'value' not in self.attr_map:
            return None
        return self.attr_map['value']

    def get_attribute_value2(self, attr):
...@@ -110,23 +110,26 @@ class ONNXGraphDataNode(GraphNode):
    def out_shapes(self):
        values = self.layer.type.tensor_type.shape.dim
        out_shapes = list()
        out_shapes.append([dim.dim_value for dim in values])
        return out_shapes

    @property
    def dtype(self):
        dtype = self.layer.type.tensor_type.elem_type
        return TENSOR_TYPE_TO_NP_TYPE[dtype]
class ONNXGraph(Graph):
    def __init__(self, graph, onnx_model):
        super(ONNXGraph, self).__init__(graph)
        self.onnx_model = onnx_model
        self.initializer = {}
        self.place_holder_nodes = list()
        self.get_place_holder_nodes()
        self.value_infos = self.inferred_model_value_info(graph)
        self.results_of_inference = dict()

    def get_inner_nodes(self):
        """
        generate inner node of ONNX model
...@@ -162,17 +165,22 @@ class ONNXGraph(Graph):
        """
        build topo_sort of ONNX model
        """
        data_node = self.place_holder_nodes[0]
        value_info = self.value_infos[data_node]
        input_shape = value_info['shape']
        self.get_results_of_inference(self.onnx_model, input_shape)
        for layer in self.model.node:
            node = ONNXGraphNode(layer)
            self.node_map[layer.name] = node
            for opt in layer.output:
                if opt in self.value_infos:
                    value_info = self.value_infos[opt]
                    node.dtype = value_info['dtype']
                    node.out_shapes.append(value_info['shape'])
                else:
                    _, dtype, shape = self.get_dynamic_shape(opt)
                    node.dtype = dtype
                    node.out_shapes.append(shape)

        for layer in self.model.input:
            if layer.name not in self.node_map:
...@@ -199,7 +207,6 @@
                    format(in_node, layer_name))
                else:
                    self.connect(in_node, layer_name)
        #generate topo
        super(ONNXGraph, self).build()

...@@ -227,31 +234,108 @@
            weight = to_array(initializer)
            yield name, weight
    def inferred_model_value_info(self, graph):
        """
        collect value/type info for an ONNX model
        """
        assert isinstance(graph,
                          onnx.GraphProto), 'graph is not a GraphProto instance'
        value_info = Dict()
        for item in graph.value_info:
            value_info[item.name] = {
                'dtype':
                TENSOR_TYPE_TO_NP_TYPE[item.type.tensor_type.elem_type],
                'shape':
                [dim.dim_value for dim in item.type.tensor_type.shape.dim],
                'external': False
            }
        for item in graph.input:
            assert item.name not in value_info
            value_info[item.name] = {
                'dtype':
                TENSOR_TYPE_TO_NP_TYPE[item.type.tensor_type.elem_type],
                'shape':
                [dim.dim_value for dim in item.type.tensor_type.shape.dim],
                'external': True
            }
        for item in graph.output:
            assert item.name not in value_info
            value_info[item.name] = {
                'dtype':
                TENSOR_TYPE_TO_NP_TYPE[item.type.tensor_type.elem_type],
                'shape':
                [dim.dim_value for dim in item.type.tensor_type.shape.dim],
                'external': True
            }
        return value_info
    def get_results_of_inference(self, model, shape):
        try:
            import torch
            version = torch.__version__
            if '1.1.0' not in version:
                print("your model has a dynamic graph, torch==1.1.0 is required")
                return
        except:
            print(
                "your model has a dynamic graph, we use caffe2 to run inference on it, please run \"pip install torch==1.1.0\"."
            )
            return
        from x2paddle.decoder.onnx_backend import prepare

        np_images = np.random.rand(shape[0], shape[1], shape[2],
                                   shape[3]).astype('float32')

        # register every node as a graph output so its value can be fetched
        outputs = []
        for node in model.graph.node:
            value_info = helper.make_tensor_value_info(node.name,
                                                       TensorProto.UNDEFINED,
                                                       [])
            outputs.append(value_info)

        # run the backend in chunks of at most 254 outputs at a time
        while len(outputs) > 0:
            tmp_outputs = outputs[:254]
            model.graph.ClearField('output')
            model.graph.output.MergeFrom(tmp_outputs)
            prepared_backend = prepare(model,
                                       device='CPU',
                                       no_check_UNSAFE=True)
            res = prepared_backend.run(inputs=np_images)
            for idx, info in enumerate(tmp_outputs):
                self.results_of_inference[info.name] = res[idx]
            outputs = outputs[254:]
        return

    def get_dynamic_shape(self, layer):
        """
        get dynamic shape from caffe2.backend
        """
        output = self.results_of_inference[layer]
        return output.tolist(), output.dtype, output.shape
class ONNXDecoder(object):
    def __init__(self, onnx_model):
        model = onnx.load(onnx_model)
        print('model ir_version: {}, op version: {}'.format(
            model.ir_version, model.opset_import[0].version))
        if model.opset_import[0].version < 9:
            _logger.warning(
                'Currently, onnx2paddle mainly supports converting ONNX models with opset_version == 9; '
                'the opset_version of your ONNX model is %d < 9, '
                'so some operators may fail to convert.',
                model.opset_import[0].version)

        check_model(model)
        model = polish_model(model)
        model = onnx.shape_inference.infer_shapes(model)
        model = self.optimize_model_skip_op_for_inference(model)
        model = self.optimize_model_strip_initializer(model)
        self.standardize_variable_name(model.graph)
        self.model = model
        graph_def = model.graph
        self.onnx_graph = ONNXGraph(graph_def, model)
        self.onnx_graph.build()
    def build_value_refs(self, nodes):
...@@ -334,9 +418,13 @@ class ONNXDecoder(object):
                        output_name, output_refs)
                else:
                    processed = -1
                if processed > 0:
                    nodes_to_remove.append(node_idx)
                    for value_info in ret.graph.value_info:
                        for output in node.output:
                            if value_info.name == output:
                                ret.graph.value_info.remove(value_info)
                    print('skip op {}: {} -> {} -> {}'.format(
                        node_idx, input_name, node.op_type, output_name))
                elif processed == 0:
...@@ -396,7 +484,6 @@ class ONNXDecoder(object):
        """
        standardize variable name for paddle's code
        """
        for initializer in graph.initializer:
            initializer.name = self.make_variable_name(initializer.name)
        for ipt in graph.input:
...@@ -455,43 +542,3 @@ class ONNXDecoder(object):
            raise RuntimeError("Input mismatch {} != {}".format(
                len(onnx_model.input), len(model.input)))
        return onnx_model
...@@ -176,7 +176,7 @@ class TFGraph(Graph):
    def _remove_identity_node(self):
        identity_node = list()
        for node_name, node in self.node_map.items():
            if node.layer_type == "Identity" or node.layer_type == "StopGradient":
                identity_node.append(node_name)

        for node_name in identity_node:
...@@ -374,3 +374,38 @@ class TFDecoder(object):
            return results[0].tolist()
        else:
            raise Exception("Couldn't infer a stable shape tensor value")
    def infer_tensor_shape(self, graph_node):
        if hasattr(graph_node, "index"):
            tensor_name = graph_node.layer.name + ":{}".format(graph_node.index)
        else:
            tensor_name = graph_node.layer.name + ":0"
        feed = dict()
        # run the graph with three different batch sizes to find stable dims
        batch_size = [2, 3, 5]
        shapes = list()
        for b in batch_size:
            for input_name, info in self.input_info.items():
                (shape, dtype) = cp.deepcopy(info)
                input_tensor = self.sess.graph.get_tensor_by_name(input_name +
                                                                  ":0")
                if shape.count(-1) > 0:
                    shape[shape.index(-1)] = b
                feed[input_tensor] = numpy.random.random_sample(shape)
            output_tensor = self.sess.graph.get_tensor_by_name(tensor_name)
            shape = self.sess.run([output_tensor], feed)[0].shape
            shapes.append(numpy.array(shape))

        compare01 = (shapes[0] == shapes[1])
        compare12 = (shapes[1] == shapes[2])

        if compare01.all() and compare12.all():
            return shapes[0].tolist()

        if (compare01 == compare12).all():
            index = numpy.argwhere(compare01 == False).flatten()
            if index.shape[0] != 1:
                raise Exception("There's not only one unstable dimension")
            if index[0] != 0:
                raise Exception("Batch size not in the first dimension")

            shapes[0][0] = -1
            return shapes[0].tolist()
...@@ -308,7 +308,7 @@ class CaffeOpMapper(OpMapper):
            'pool_padding': pad,
            'ceil_mode': ceil_mode,
            'pool_type': string(pool_type),
            'exclusive': False,
            'global_pooling': global_pool,
            'name': string(node.layer_name)
        }
......
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .register import register
def InstanceNormalization_shape(input_shape):
    return input_shape


def InstanceNormalization_layer(inputs, name=None):
    # TODO(lvmengsi@baidu.com): Check the accuracy when using fluid.layers.layer_norm.
    epsilon = 1e-5
    mean = fluid.layers.reduce_mean(inputs, dim=[2, 3], keep_dim=True)
    var = fluid.layers.reduce_mean(fluid.layers.square(inputs - mean),
                                   dim=[2, 3],
                                   keep_dim=True)
    # scale_name/offset_name assume a non-None name is always passed in
    if name is not None:
        scale_name = name + "_scale"
        offset_name = name + "_offset"
    scale_param = fluid.ParamAttr(name=scale_name,
                                  initializer=fluid.initializer.Constant(1.0),
                                  trainable=True)
    offset_param = fluid.ParamAttr(name=offset_name,
                                   initializer=fluid.initializer.Constant(0.0),
                                   trainable=True)
    scale = fluid.layers.create_parameter(attr=scale_param,
                                          shape=inputs.shape[1:2],
                                          dtype="float32")
    offset = fluid.layers.create_parameter(attr=offset_param,
                                           shape=inputs.shape[1:2],
                                           dtype="float32")
    tmp = fluid.layers.elementwise_mul(x=(inputs - mean), y=scale, axis=1)
    tmp = tmp / fluid.layers.sqrt(var + epsilon)
    tmp = fluid.layers.elementwise_add(tmp, offset, axis=1)
    return tmp


def InstanceNormalization_weights(name, data=None):
    weights_name = [name + '_scale']
    return weights_name


register(kind='InstanceNormalization',
         shape=InstanceNormalization_shape,
         layer=InstanceNormalization_layer,
         weights=InstanceNormalization_weights)
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .register import get_registered_layers
#custom layer import begins
from . import InstanceNormalization
#custom layer import ends
custom_layers = get_registered_layers()
def set_args(f, params):
    """ set args for function 'f' using the parameters in node.layer.param
    Args:
        f (function): a python function object
        params (object): an object containing the attributes needed by f's arguments
    Returns:
        arg_names (list): a list of argument names
        kwargs (dict): a dict of needed arguments
    """
    argc = f.__code__.co_argcount
    arg_list = f.__code__.co_varnames[0:argc]
    kwargs = {}
    for arg_name in arg_list:
        if hasattr(params, arg_name) and params is not None:
            kwargs[arg_name] = getattr(params, arg_name)
    return arg_list, kwargs
def has_layer(layer_type):
    """ test whether this layer exists in custom layer
    """
    return layer_type in custom_layers


def get_params(layer, layer_type):
    import re
    if layer_type.lower() == "deconvolution" or layer_type.lower(
    ) == "convolutiondepthwise":
        param_name = '_'.join(('convolution', 'param'))
    elif layer_type.lower() == "normalize":
        param_name = '_'.join(('norm', 'param'))
    elif len(layer_type) - len(re.sub("[A-Z]", "", layer_type)) >= 2:
        # convert CamelCase layer types to snake_case parameter names
        s = ''
        tmp_name = ''
        for i, ch in enumerate(layer_type):
            if i == 0:
                s += ch.lower()
                continue
            elif ch.isupper() and layer_type[i - 1].islower():
                tmp_name += (s + '_')
                s = ''
            s += ch.lower()
        tmp_name += s
        param_name = '_'.join((tmp_name, 'param'))
    else:
        param_name = '_'.join((layer_type.lower(), 'param'))
    return getattr(layer, param_name, None)


def compute_output_shape(node):
    """ compute the output shape of custom layer
    """
    layer_type = node.layer_type
    assert layer_type in custom_layers, "layer[%s] not exist in custom layers" % (
        layer_type)
    shape_func = custom_layers[layer_type]['shape']
    layer = node.layer
    params = get_params(layer, layer_type)
    arg_names, kwargs = set_args(shape_func, params)
    input_shape = node.input_shape
    return shape_func(input_shape, **kwargs)


def make_custom_layer(node):
    """ get the code which implements the custom layer function
    """
    layer_type = node.layer_type
    assert layer_type in custom_layers, "layer[%s] not exist in custom layers" % (
        layer_type)
    layer_func = custom_layers[layer_type]['layer']
    import inspect
    return inspect.getsource(layer_func), layer_func


def deal_weights(node, data=None):
    """ deal with the weights of the custom layer
    """
    layer_type = node.layer_type
    weights_func = custom_layers[layer_type]['weights']
    name = node.layer_name
    return weights_func(name, data)
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" this module provides 'register' for registering customized layers
"""
g_custom_layers = {}
def register(kind, shape, layer, weights):
    """ register a custom layer or a list of custom layers
    Args:
        @kind (str or list): type name of the layer
        @shape (function): a function to generate the shape of layer's output
        @layer (function): a function to generate the paddle code of layer
        @weights (function): a function to deal with weights data
    Returns:
        None
    """
    assert type(shape).__name__ == 'function', 'shape should be a function'
    assert type(layer).__name__ == 'function', 'layer should be a function'
    if type(kind) is str:
        kind = [kind]
    else:
        assert type(
            kind) is list, 'invalid param "kind" for register, not a list or str'
    for k in kind:
        assert type(
            k) is str, 'invalid param "kind" for register, not a list of str'
        assert k not in g_custom_layers, 'this type[%s] has already been registered' % (
            k)
        print('register layer[%s]' % (k))
        g_custom_layers[k] = {
            'shape': shape,
            'layer': layer,
            'weights': weights
        }


def get_registered_layers():
    return g_custom_layers
...@@ -24,6 +24,7 @@ default_op_mapping_field_values['DEFAULTS'] = dict()
default_op_mapping_field_values['INPUT_PERM'] = None
default_op_mapping_field_values['OUTPUT_PERM'] = None
default_op_mapping_field_values['FILL_NAME_FIELD'] = True

default_op_mapping = {
    'Gather': ['gather', ['X'], ['Out'],
               dict(axis='')],
...@@ -46,8 +47,44 @@ default_op_mapping = {
        dict(axes='dim', keepdims='keep_dim'),
        dict(keep_dim=1)
    ],
    'ReduceSum': [
        'reduce_sum', ['X'], ['Out'],
        dict(axes='dim', keepdims='keep_dim'),
        dict(keep_dim=1)
    ],
    # activation functions
    'Relu': ['relu', ['X'], ['Out']],
    'LeakyRelu': ['leaky_relu', ['X'], ['Out'],
                  dict(), dict(alpha=.01)],
    'Elu': ['elu', ['X'], ['Out'],
            dict(), dict(alpha=1.)],
    'ThresholdedRelu': [
        'thresholded_relu', ['X'], ['Out'],
        dict(alpha='threshold'),
        dict(alpha=1.)
    ],
    'Tanh': ['tanh', ['X'], ['Out']],
    'Sigmoid': ['sigmoid', ['X'], ['Out']],
    'Pow': ['elementwise_pow', ['X', 'Y'], ['Out'],
            dict(),
            dict(axis=-1)],  # TODO: pow for scalar exponent
    'HardSigmoid': [
        'hard_sigmoid', ['X'], ['Out'],
        dict(alpha='slope', beta='offset'),
        dict(slope=.2, offset=.5)
    ],
    'Softsign': ['softsign', ['X'], ['Out']],
    'Softplus': ['softplus', ['X'], ['Out']],
    'Exp': ['exp', ['X'], ['Out']],
    'Softmax': ['softmax', ['X'], ['Out'],
                dict(axis=''),
                dict(axis=1)],
}

activefunc_op_mapping = {
    'LeakyRelu': ['leaky_relu', ['X'], ['Out'],
                  dict(), dict(alpha=.01)],
}
default_ioa_constraint = {
......
...@@ -58,6 +58,7 @@ class TFOpMapper(OpMapper):
        'Exp': ['exp'],
        'Rsqrt': ['rsqrt'],
        'swish_f32': ['swish'],
        'Tanh': ['tanh'],
        'LeakyRelu': ['leaky_relu', {
            'alpha': 'alpha'
        }]
...@@ -188,6 +189,10 @@ class TFOpMapper(OpMapper):
            if y_shape[index] != x_shape[index]:
                is_sub_seq = False
        if not is_sub_seq:
            if x_shape.count(-1) > 2:
                x_shape = self.decoder.infer_tensor_shape(x_input)
            if y_shape.count(-1) > 2:
                y_shape = self.decoder.infer_tensor_shape(y_input)
            x_expand_times = [1] * len(x_shape)
            y_expand_times = [1] * len(y_shape)
            x_need_expand = False
...@@ -913,6 +918,12 @@ class TFOpMapper(OpMapper):
        self.add_omit_nodes(kernel.layer_name, node.layer_name)
        self.add_omit_nodes(out_shape.layer_name, node.layer_name)
        if out_shape.layer_type == "Const":
            out_shape = out_shape.value.tolist()
        else:
            out_shape = self.decoder.infer_shape_tensor(out_shape,
                                                        node.out_shapes[0])
        in_shape = input.out_shapes[0]
        if in_shape.count(-1) > 2:
            in_shape = self.decoder.infer_tensor(input).shape
...@@ -920,7 +931,7 @@ class TFOpMapper(OpMapper):
        if k_size.count(-1) > 2:
            k_size = self.decoder.infer_tensor(kernel).shape

        pad_mode = node.get_attr("padding").decode()
        strides = node.get_attr("strides")
        dilations = node.get_attr("dilations")
        data_format = node.get_attr("data_format").decode()
...@@ -963,6 +974,22 @@ class TFOpMapper(OpMapper):
                                   output=node,
                                   param_attr=attr)
        if pad_mode == "SAME":
            if node.tf_data_format == "NHWC":
                out_shape = [out_shape[i] for i in [0, 3, 1, 2]]
            for i in range(4):
                if out_shape[i] < 0:
                    out_shape[i] = 999999
            attr = {
                "axes": [0, 1, 2, 3],
                "starts": [0, 0, 0, 0],
                "ends": out_shape
            }
            node.fluid_code.add_layer("slice",
                                      inputs=node,
                                      output=node,
                                      param_attr=attr)
    def Max(self, node):
        input = self.graph.get_node(node.layer.input[0], copy=True)
        reduce_idx = self.graph.get_node(node.layer.input[1], copy=True)
...@@ -1173,3 +1200,17 @@ class TFOpMapper(OpMapper):
                                   inputs=None,
                                   output=node,
                                   param_attr=attr)

    def SquaredDifference(self, node):
        x = self.graph.get_node(node.layer.input[0], copy=True)
        y = self.graph.get_node(node.layer.input[1], copy=True)
        inputs = {"x": x, "y": y}
        # (x - y)^2 built from elementwise_sub followed by elementwise_mul
        node.fluid_code.add_layer("elementwise_sub",
                                  inputs=inputs,
                                  output=node,
                                  param_attr=None)
        inputs = {"x": node, "y": node}
        node.fluid_code.add_layer("elementwise_mul",
                                  inputs=inputs,
                                  output=node,
                                  param_attr=None)
...@@ -14,7 +14,6 @@
# TODO useless node remove
from x2paddle.op_mapper.onnx_op_mapper import ONNXOpMapper


class ONNXOptimizer(object):
......
# X2Paddle model test zoo
> X2Paddle currently supports 40+ TensorFlow OPs and 40+ Caffe Layers, covering most operations used by mainstream CV classification models. We have tested X2Paddle's conversion on the model list below.

**Note:** Due to differences between frameworks, some models cannot be converted yet, such as TensorFlow models containing control flow, NLP models, and so on. For common CV models, if you find one that cannot be converted, fails during conversion, or shows a large diff, please let us know via an [ISSUE](https://github.com/PaddlePaddle/X2Paddle/issues/new) (model name plus the code or a way to obtain the model) and we will follow up promptly :)

## TensorFlow

| Model | Code | Notes |
|------|----------|------|
| SqueezeNet | [code](https://github.com/tensorflow/tpu/blob/master/models/official/squeezenet/squeezenet_model.py) |-|
| MobileNet_V1 | [code](https://github.com/tensorflow/models/tree/master/research/slim/nets) |-|
| MobileNet_V2 | [code](https://github.com/tensorflow/models/tree/master/research/slim/nets) |-|
| ShuffleNet | [code](https://github.com/TropComplique/shufflenet-v2-tensorflow) |-|
| mNASNet | [code](https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet) |-|
| EfficientNet | [code](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) |-|
| Inception_V4 | [code](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v4.py) |-|
| Inception_ResNet_V2 | [code](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_resnet_v2.py) |-|
| VGG16 | [code](https://github.com/tensorflow/models/tree/master/research/slim/nets) |-|
| ResNet_V1_101 | [code](https://github.com/tensorflow/models/tree/master/research/slim/nets) |-|
| ResNet_V2_101 | [code](https://github.com/tensorflow/models/tree/master/research/slim/nets) |-|
| UNet | [code1](https://github.com/jakeret/tf_unet)/[code2](https://github.com/lyatdawn/Unet-Tensorflow) |-|
| MTCNN | [code](https://github.com/AITTSMD/MTCNN-Tensorflow) |-|
| YOLO-V3 | [code](https://github.com/YunYang1994/tensorflow-yolov3) | Conversion requires disabling the NHWC->NCHW optimization, see [FAQ Q2](FAQ.md) |

## Caffe

| Model | Code |
|-------|--------|
...@@ -29,36 +35,24 @@
| mNASNet | [code](https://github.com/LiJianfei06/MnasNet-caffe) |
| MTCNN | [code](https://github.com/kpzhang93/MTCNN_face_detection_alignment/tree/master/code/codes/MTCNNv1/model) |

## ONNX
**Note:** Some of the models below come from PyTorch; see [pytorch_to_onnx.md](pytorch_to_onnx.md) for converting a PyTorch model to ONNX.

| Model | Source | operator version |
|-------|--------|---------|
| ResNet18 | [torchvision.models.resnet18](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) |9|
| ResNet34 | [torchvision.models.resnet34](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) |9|
| ResNet50 | [torchvision.models.resnet50](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) |9|
| ResNet101 | [torchvision.models.resnet101](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) |9|
| VGG11 | [torchvision.models.vgg11](https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py) |9|
| VGG11_bn | [torchvision.models.vgg11_bn](https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py) |9|
| VGG19 | [torchvision.models.vgg19](https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py) |9|
| DenseNet121 | [torchvision.models.densenet121](https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py) |9|
| AlexNet | [torchvision.models.alexnet](https://github.com/pytorch/vision/blob/master/torchvision/models/alexnet.py) |9|
| ShuffleNet | [onnx official](https://github.com/onnx/models/tree/master/vision/classification/shufflenet) |9|
| Inception_V2 | [onnx official](https://github.com/onnx/models/tree/master/vision/classification/inception_and_googlenet/inception_v2) |9|
| MobileNet_V2 | [pytorch(personal practice)](https://github.com/tonylins/pytorch-mobilenet-v2) |9|
| mNASNet | [pytorch(personal practice)](https://github.com/rwightman/gen-efficientnet-pytorch) |9|
| EfficientNet | [pytorch(personal practice)](https://github.com/rwightman/gen-efficientnet-pytorch) |9|
| SqueezeNet | [onnx official](https://s3.amazonaws.com/download.onnx/models/opset_9/squeezenet.tar.gz) |9|