Commit 56ee75d9 authored by SunAhong1993

add lrn

Parent 4db8d282
......@@ -42,7 +42,7 @@
 | 21 | Axpy | 22 | ROIPolling | 23 | Permute | 24 | DetectionOutput |
 | 25 | Normalize | 26 | Select | 27 | ShuffleChannel | 28 | ConvolutionDepthwise |
 | 29 | ReLU | 30 | AbsVal | 31 | Sigmoid | 32 | TanH |
-| 33 | ReLU6 | 34 | Upsample | | | | |
+| 33 | ReLU6 | 34 | Upsample | 35 | MemoryData | | |
 ## ONNX
......@@ -63,7 +63,7 @@
 | 49 | MaxPool | 50 | Conv | 51 | Gemm | 52 | NonZero |
 | 53 | Abs | 54 | Floor | 56 | ArgMax | 57 | Sign |
 | 58 | Reciprocal | 59 | Size | 60 | OneHot | 61 | ReduceProd |
-| 62 | LogSoftmax | 63 | LSTM | | | | |
+| 62 | LogSoftmax | 63 | LSTM | 64 | LRN | | |
......@@ -99,7 +99,7 @@ Aten:
 | 101 | aten::upsample\_bilinear2d | 102 | aten::values |103|aten::view|104|aten::warn|
 | 105 | aten::where | 106 | aten::zeros |107|aten::zeros\_like|108|aten::bmm|
 | 109 | aten::sub\_ | 110 | aten::erf |111|aten::lstm|112|aten::gather|
-| 113 | aten::upsample\_nearest2d || |||||
+| 113 | aten::upsample\_nearest2d | 114 |aten::split\_with\_sizes |||||
 Prim:
 | No. | OP | No. | OP | No. | OP | No. | OP |
......
 # X2Paddle Model Test Suite
-> Currently X2Paddle supports 70+ TensorFlow OPs and 40+ Caffe Layers, covering most operations commonly used in CV classification models. We have tested X2Paddle's conversion on the models listed below.
+> Currently X2Paddle supports 80+ TensorFlow OPs, 30+ Caffe Layers, 60+ ONNX OPs, 110+ PyTorch Aten ops, and 10+ PyTorch Prim ops, covering most operations commonly used in CV classification models. We have tested X2Paddle's conversion on the models listed below.
 **Note:** Due to differences between frameworks, some models currently cannot be converted, such as TensorFlow models containing control flow, NLP models, etc. For common CV models, if you find that a model cannot be converted, fails during conversion, or produces a large diff, please let us know via an [ISSUE](https://github.com/PaddlePaddle/X2Paddle/issues/new) (model name, code implementation, or how to obtain the model), and we will follow up promptly :)
......
......@@ -229,6 +229,11 @@ def main():
     assert args.paddle_type in ["dygraph", "static"], "--paddle_type must be 'dygraph' or 'static'"
     try:
+        import platform
+        v0, v1, v2 = platform.python_version().split('.')
+        if (int(v0), int(v1)) < (3, 5):
+            print("[ERROR] python>=3.5 is required")
+            return
         import paddle
         v0, v1, v2 = paddle.__version__.split('.')
         print("paddle.__version__ = {}".format(paddle.__version__))
......
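Version gates like the one above are easiest to get right with an integer-tuple comparison, which also handles two-digit minor versions such as 3.10 correctly (a plain digit-by-digit or string comparison would not). A minimal standalone sketch; the helper name `meets_minimum` is our own, not part of the repo:

```python
import platform

def meets_minimum(version, minimum=(3, 5)):
    """Return True if a 'X.Y.Z' version string is at least `minimum`.

    Components are compared as an integer tuple, so '3.10' correctly
    ranks above '3.5'.
    """
    parts = tuple(int(p) for p in version.split('.')[:2])
    return parts >= minimum

# Gate on the running interpreter, as the script above does:
print(meets_minimum(platform.python_version()))
```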
......@@ -184,7 +184,7 @@ class TFGraph(Graph):
         node = super(TFGraph, self).get_node(new_node_name, copy)
         if node is None:
             return None
-        if node.layer_type == "Switch":
+        if node.layer_type not in ["Unpack", "Split"]:
             if hasattr(node, 'index'):
                 del node.index
         if len(items) == 1 and node.layer_type in self.multi_out_ops:
......
......@@ -2013,3 +2013,25 @@ class OpSet9():
                     "k": val_k.name},
             outputs=["{}_p{}".format(node.layer_name, 0), "{}_p{}".format(node.layer_name, 1)],
             **layer_attrs)
+
+    @print_mapping_info
+    def LRN(self, node):
+        op_name = name_generator("lrn", self.nn_name2id)
+        output_name = node.name
+        layer_outputs = [op_name, output_name]
+        val_x = self.graph.get_input_node(node, idx=0, copy=True)
+        alpha = node.get_attr('alpha', 0.0001)
+        beta = node.get_attr('beta', 0.75)
+        bias = node.get_attr('bias', 1.0)
+        size = node.get_attr('size')
+        layer_attrs = {
+            'size': size,
+            'alpha': alpha,
+            'beta': beta,
+            'k': bias
+        }
+        self.paddle_graph.add_layer(
+            "paddle.nn.LocalResponseNorm",
+            inputs={"x": val_x.name},
+            outputs=layer_outputs,
+            **layer_attrs)
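For reference, the effect of the op mapped above can be sketched in pure Python on a 1-D vector of channel values. This is a simplification of the cross-channel window the real kernel applies over NCHW tensors; `lrn_1d` is an illustrative helper, not part of X2Paddle. Defaults mirror the ONNX attributes read above (alpha=1e-4, beta=0.75, bias/k=1.0), and the window is simply clipped at the edges:

```python
def lrn_1d(x, size, alpha=1e-4, beta=0.75, k=1.0):
    """Local Response Normalization over a 1-D list of channel values.

    Each channel is divided by (k + alpha/size * sum of squares over a
    window of `size` neighboring channels) raised to the power `beta`.
    """
    half = size // 2
    out = []
    for c in range(len(x)):
        # Window of up to `size` channels centered on c, clipped at edges.
        window = x[max(0, c - half):c + half + 1]
        sq_sum = sum(v * v for v in window)
        out.append(x[c] / (k + alpha / size * sq_sum) ** beta)
    return out

print(lrn_1d([1.0, 2.0, 3.0], size=3))
```

With alpha=0 the denominator collapses to k**beta = 1, so the input passes through unchanged, which is a handy sanity check for the parameter mapping (Paddle's `k` corresponds to ONNX's `bias`).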
......@@ -1780,4 +1780,24 @@ class OpSet9():
             inputs={"x": val_x.name,
                     "k": val_k.name},
             outputs=["{}_p{}".format(node.layer_name, 0), "{}_p{}".format(node.layer_name, 1)],
-            **layer_attrs)
\ No newline at end of file
+            **layer_attrs)
+
+    @print_mapping_info
+    def LRN(self, node):
+        val_x = self.graph.get_input_node(node, idx=0, copy=True)
+        alpha = node.get_attr('alpha', 0.0001)
+        beta = node.get_attr('beta', 0.75)
+        bias = node.get_attr('bias', 1.0)
+        size = node.get_attr('size')
+        layer_attrs = {
+            'size': size,
+            'alpha': alpha,
+            'beta': beta,
+            'k': bias
+        }
+        self.paddle_graph.add_layer(
+            'paddle.nn.functional.local_response_norm',
+            inputs={"x": val_x.name},
+            outputs=[node.name],
+            **layer_attrs)