Unverified commit e19d3442, authored by SunAhong1993, committed by GitHub

Merge pull request #13 from PaddlePaddle/develop

docs
......@@ -5,12 +5,12 @@ X2Paddle supports converting models trained in other deep learning frameworks to PaddlePaddle
X2Paddle is a toolkit for converting trained model to PaddlePaddle from other deep learning frameworks.
## Model Zoo
X2Paddle has tested TensorFlow/Caffe/ONNX/PyTorch model conversion on a number of mainstream CV models. See [X2Paddle-Model-Zoo](./docs/introduction/x2paddle_model_zoo.md) for the list of tested models and [OP-LIST](./docs/introduction/op_list.md) for the ops X2Paddle currently supports. If you have tested conversion on new models, additions to the list are welcome; if a model fails to convert, report it via an issue and we will follow up as soon as possible.
## Requirements
python == 2.7 | python >= 3.5
paddlepaddle 2.0-rc or develop
**Install the following dependencies as needed**
tensorflow : tensorflow == 1.14.0
......@@ -30,7 +30,7 @@ python setup.py install
### Installation option 2
We periodically publish updated x2paddle versions to the pip source
```
pip install x2paddle==1.0.0rc0 --index https://pypi.Python.org/simple/
```
## Usage
### TensorFlow
......@@ -47,7 +47,7 @@ x2paddle --framework=onnx --model=onnx_model.onnx --save_dir=pd_model
```
### PyTorch
> PyTorch conversion is not available from the command line; see [PyTorch2Paddle](./docs/user_guides/pytorch2paddle.md)
### Paddle2ONNX
> The Paddle2ONNX feature has moved to a new repository: https://github.com/PaddlePaddle/paddle2onnx. Please visit the new repo for a detailed introduction and new features.
......@@ -62,15 +62,21 @@ x2paddle --framework=onnx --model=onnx_model.onnx --save_dir=pd_model
|--save_dir | directory in which the converted model is saved |
|--model | when framework is tensorflow/onnx, the path of the TensorFlow pb model or the ONNX model |
|--caffe_proto | **[optional]** path of the caffe_pb2.py file compiled from caffe.proto; used when custom Layers are present; defaults to None |
|--define_input_shape | **[optional]** TensorFlow only; when set, the user is forced to supply the shape of every Placeholder, see [FAQ Q2](./docs/user_guides/FAQ.md) |
|--params_merge | **[optional]** when set, all model parameters in inference_model are merged into a single file __params__ after conversion |
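For example, a TensorFlow conversion combining the optional flags above might look like this (the pb file name is illustrative):
```
x2paddle --framework=tensorflow --model=tf_model.pb --save_dir=pd_model --define_input_shape --params_merge
```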
## Using the converted model
- Static graph:
The converted model consists of two directories, `model_with_code` and `inference_model`.
`model_with_code` holds the model parameters together with the converted static-graph Python code.
`inference_model` holds the serialized model structure and parameters; it can be loaded directly through the Paddle API, see [paddle.static.load_inference_model](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0-rc/api/paddle/static/load_inference_model_cn.html#load-inference-model)
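A minimal loading sketch for the static-graph result (the save directory name is illustrative; the call mirrors the tools script later in this repo):
```python
import paddle

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())
# Load the serialized program and parameters from the inference_model directory.
[program, feed_names, fetch_targets] = paddle.static.load_inference_model(
    dirname="pd_model/inference_model", executor=exe)
```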
- Dygraph:
The converted model consists of two files, `model.pdparams` and `x2paddle_code.py`, plus one directory, `inference_model`.
`model.pdparams` holds the model parameters.
`x2paddle_code.py` is the converted dygraph Python code.
`inference_model` holds the serialized model structure and parameters; it can be loaded directly through the Paddle API, see [paddle.static.load_inference_model](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0-rc/api/paddle/static/load_inference_model_cn.html#load-inference-model)
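A minimal sketch of calling the dygraph result directly (paths and the input shape are illustrative; the pattern mirrors FAQ Q6):
```python
import paddle
import numpy as np

paddle.disable_static()
# "/" in the save path becomes "." in the import path.
from pd_model.x2paddle_code import main
ipt = np.random.rand(1, 3, 224, 224).astype("float32")
out = main(ipt)
```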
## Tools
X2Paddle provides tools to solve the following problems; see [tools/README.md](tools/README.md) for details
......@@ -78,11 +84,12 @@ X2Paddle provides tools to solve the following problems; see [tools/README.md](tools/README
2. Merge model parameter files
## Related documentation
1. [Frequently asked questions about X2Paddle](./docs/user_guides/FAQ.md)
2. [How to export a TensorFlow pb model](./docs/user_guides/export_tf_model.md)
3. [X2Paddle tested model zoo](./docs/introduction/x2paddle_model_zoo.md)
4. [Ops supported by X2Paddle](./docs/introduction/op_list.md)
5. [Exporting a PyTorch model to ONNX](./docs/user_guides/pytorch2onnx.md)
6. [Adding built-in custom Caffe layers to X2Paddle](./docs/user_guides/add_caffe_custom_layer.md)
## Changelog
2019.08.05
......@@ -91,7 +98,15 @@ X2Paddle提供了工具解决如下问题,详见[tools/README.md](tools/README
3. Fixed models saved on Windows failing to load
4. Added an optimizer that streamlines the generated code and fuses the bias and activation of conv and batch_norm
**If you need the earlier tensorflow2fluid/caffe2fluid/onnx2fluid, the release-0.3 branch still provides the code of those versions.**
2020.12.09
1. Added the PyTorch2Paddle conversion path, which produces Paddle dygraph code and derives an inference_model via dynamic-to-static conversion.
Option 1: trace mode; the converted code is split into modules, each behaving like its PyTorch counterpart.
Option 2: script mode; the converted code appears line by line in execution order.
2. Added Caffe/ONNX/TensorFlow conversion to Paddle dygraph.
3. Added 14 TensorFlow ops: Neg, Greater, FloorMod, LogicalAnd, Prod, Equal, Conv3D, Ceil, AddN, DivNoNan, Where, MirrorPad, Size, TopKV2
4. Added an Optimizer module covering op fusion and op elimination; the converted code is more readable and inference is faster.
**If you need the earlier tensorflow2fluid/caffe2fluid/onnx2fluid, the release-0.9 branch still provides the code of those versions.**
## Acknowledgements
......
# 1. introduction
1. op_list.md: ops currently supported for conversion from each framework.
2. x2paddle_model_zoo.md: list of tested models.
# 2. user_guides
1. FAQ.md: frequently asked questions.
2. add_caffe_custom_layer.md: how to add a custom Caffe Layer, and the list of custom Layers currently supported.
3. export_tf_model.md: how to export TensorFlow models in the format this tool supports.
4. pytorch2onnx.md: exporting PyTorch models to ONNX.
5. pytorch2paddle.md: converting PyTorch models to Paddle models.
......@@ -24,7 +24,11 @@
| 57 | Sqrt | 58 | Softplus | 59 | Erf | 60 | AddV2 |
| 61 | LessEqual | 62 | BatchMatMul | 63 | BatchMatMulV2 | 64 | ExpandDims |
| 65 | BatchToSpaceND | 66 | SpaceToBatchND | 67 | OneHot | 68 | Pow |
| 69 | All | 70 | GatherV2 | 71 | IteratorV2 | 72 | Neg |
| 73 | Greater | 74 | FloorMod | 75 | LogicalAnd | 76 | Prod |
| 77 | Equal | 78 | Conv3D | 79 | Ceil | 80 | AddN |
| 81 | DivNoNan | 82 | Where | 83 | MirrorPad | 84 | Size |
| 85 | TopKV2 | | | | | | |
## Caffe
......@@ -58,3 +62,44 @@
| 45 | Squeeze | 46 | Equal | 47 | Identity | 48 | GlobalAveragePool |
| 49 | MaxPool | 50 | Conv | 51 | Gemm | 52 | NonZero |
| 53 | Abs | 54 | Floor |
## PyTorch
Aten:
| No. | OP | No. | OP | No. | OP | No. | OP |
|------|------|------|------|------|------|------|------|
| 1 | aten::abs | 2 | aten::adaptive_avg_pool2d | 3 | aten::addmm | 4 | aten::add |
| 5 | aten::add\_ | 6 | aten::\_\_and\_\_ | 7 | aten::append | 8 | aten::arange |
| 9 | aten::avg\_pool2d | 10 | aten::avg\_pool3d | 11 | aten::avg_pool1d | 12 | aten::batch_norm |
| 13 | aten::cat | 14 | aten::chunk | 15 | aten::clamp | 16 | aten::\_\_contains\_\_ |
| 17 | aten::constant\_pad\_nd | 18 | aten::contiguous | 19 | aten::conv2d | 20 | aten::\_convolution |
| 21 | aten::conv_transpose2d | 22 | aten::cos | 23 | aten::cumsum | 24 | aten::detach |
| 25 | aten::dict | 26 | aten::dim | 27 | aten::div\_ | 28 | aten::div |
| 29 | aten::dropout | 30 | aten::dropout_ | 31 | aten::embedding | 32 | aten::eq |
| 33 | aten::exp | 34 | aten::expand | 35 | aten::expand_as | 36 | aten::eye |
| 37 | aten::feature_dropout | 38 | aten::flatten | 39 | aten::Float | 40 | aten::floor |
| 41 | aten::floordiv | 42 | aten::floor_divide | 43 | aten::full_like | 44 | aten::gather |
| 45 | aten::gelu | 46 | aten::\_\_getitem\_\_ | 47 | aten::gt | 48 | aten::hardtanh\_ |
| 49 | aten::index\_select | 50 | aten::Int | 51 | aten::\_\_is\_\_ | 52 | aten::\_\_isnot\_\_ |
| 53 | aten::layer\_norm | 54 | aten::le |55|aten::leaky\_relu\_|56|aten::len|
| 57 | aten::log | 58 | aten::lt | 59 | aten::masked\_fill\_ | 60 | aten::masked\_fill |
| 61 | aten::max | 62 | aten::max\_pool2d | 63 | aten::matmul | 64 | aten::min |
| 65 | aten::mean | 66 | aten::meshgrid |67|aten::mul|68|aten::mul\_|
| 69 | aten::ne | 70 | aten::neg |71|aten::\_\_not\_\_|72|aten::ones|
| 73 | aten::permute | 74 | aten::pow |75|aten::relu|76|aten::relu\_|
| 77 | aten::relu6 | 78 | aten::repeat |79|aten::reshape|80|aten::rsub|
| 81 | aten::ScalarImplicit | 82 | aten::select |83|aten::\_set\_item|84|aten::sigmoid|
| 85 | aten::sin | 86 | aten::size |87|aten::slice|88|aten::softmax|
| 89 | aten::softplus | 90 | aten::sqrt |91|aten::squeeze|92|aten::stack|
| 93 | aten::sub | 94 | aten::t |95|aten::tanh|96|aten::split|
| 97 | aten::transpose | 98 | aten::to |99|aten::type\_as|100|aten::unsqueeze|
| 101 | aten::upsample\_bilinear2d | 102 | aten::values |103|aten::view|104|aten::warn|
| 105 | aten::where | 106 | aten::zeros |107|aten::zeros\_like|||
Prim:
| No. | OP | No. | OP | No. | OP | No. | OP |
|------|------|------|------|------|------|------|------|
| 1 | prim::Constant | 2 | prim::data | 3 | prim::DictConstruct | 4 | prim::GetAttr |
| 5 | prim::If | 6 | prim::ListConstruct | 7 | prim::ListUnpack | 8 | prim::Loop |
| 9 | prim::min | 10 | prim::NumToTensor | 11 | prim::RaiseException | 12 | prim::requires\_grad |
| 13 | prim::SetAttr | 14 | prim::shape | 15 | prim::TupleConstruct | 16 | prim::TupleUnpack |
| 17 | prim::unchecked\_cast | 18 | prim::Uninitialized | ||||
......@@ -74,3 +74,25 @@
|Ultra-Light-Fast-Generic-Face-Detector-1MB| [onnx_model](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/tree/master/models/onnx)|9 |
|BERT| [pytorch(huggingface)](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb)|11|the input shape must be specified for conversion, see [FAQ Q3](FAQ.md)|
|GPT2| [pytorch(huggingface)](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb)|11|the input shape must be specified for conversion, see [FAQ Q3](FAQ.md)|
## PyTorch
| Model | Code | Notes |
|------|----------|------|
| AlexNet | [code](https://github.com/pytorch/vision/blob/master/torchvision/models/alexnet.py)|-|
| MNasNet | [code](https://github.com/pytorch/vision/blob/master/torchvision/models/mnasnet.py) |-|
| MobileNetV2 | [code](https://github.com/pytorch/vision/blob/master/torchvision/models/mobilenet.py) |-|
| ResNet18 | [code](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) |-|
| ShuffleNetV2 | [code](https://github.com/pytorch/vision/blob/master/torchvision/models/shufflenet.py) |-|
| SqueezeNet | [code](https://github.com/pytorch/vision/blob/master/torchvision/models/squeezenet.py) |-|
| VGG16 | [code](https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py) |-|
| InceptionV3 | [code](https://github.com/pytorch/vision/blob/master/torchvision/models/inception.py) |-|
| DeepLabv3_ResNet50 | [code](https://github.com/pytorch/vision/blob/master/torchvision/models/segmentation/deeplabv3.py) |-|
| FCN_ResNet50 | [code](https://github.com/pytorch/vision/blob/master/torchvision/models/segmentation/fcn.py) |-|
| CamembertForQuestionAnswering | [code](https://huggingface.co/transformers/model_doc/camembert.html) |trace mode only|
| DPRContextEncoder | [code](https://huggingface.co/transformers/model_doc/dpr.html) |trace mode only|
| ElectraModel | [code](https://huggingface.co/transformers/model_doc/electra.html) |trace mode only|
| FlaubertModel | [code](https://huggingface.co/transformers/model_doc/flaubert.html) |trace mode only|
| Roberta| [code](https://huggingface.co/transformers/model_doc/roberta.html) |trace mode only|
| XLMRobertaForTokenClassification|[code](https://huggingface.co/transformers/model_doc/xlmroberta.html) |trace mode only|
......@@ -23,3 +23,20 @@ A: This message is an error indicating that converting this model requires a fixed input size:
> 1. If the model was exported from PaddleX, specify --fixed_input_shape=[Height,Width] in the export command; see the [PaddleX model export docs](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/deploy/export_model.md).
> 2. If the model was exported from PaddleDetection, specify TestReader.inputs_def.image_shape=[Channel,Height,Width] when exporting; see the [PaddleDetection model export docs](https://github.com/PaddlePaddle/PaddleDetection/blob/master/docs/advanced_tutorials/deploy/EXPORT_MODEL.md#设置导出模型的输入大小).
> 3. If the model was built by yourself, fix the input size via the shape argument of `fluid.data(shape=[])` when constructing the network.
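For case 3, a one-line sketch of fixing the input size (the name and shape are illustrative):
```
import paddle.fluid as fluid
# A fully fixed NCHW input; no dimension is left dynamic.
image = fluid.data(name="image", shape=[1, 3, 224, 224], dtype="float32")
```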
**Q6. During dygraph conversion, the message 『Fail to generate inference model! Problem happend while export inference model from python code...』 appears.**
A: This means the dygraph code could not be exported as a static-graph model. There are two possibilities:
> First, confirm that the converted dygraph code is correct by running it, for example with the following code:
```
import paddle
import numpy as np
np.random.seed(6)
# ipt is the input data
ipt = np.random.rand(1, 3, 224, 224).astype("float32")
paddle.disable_static()
# pd_model_dygraph is the save path (with "/" replaced by ".")
from pd_model_dygraph.x2paddle_code import main
out = main(ipt)
```
> If the code runs without error, some op in the code does not yet support dynamic-to-static conversion and will be supported in the future; if it reports an error, the pytorch2paddle conversion went wrong, please file an issue and we will respond promptly.
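If the code above runs, the failure happens in the dynamic-to-static export itself; a minimal sketch of that step, assuming the converted network is available as a paddle.nn.Layer named `model` (a hypothetical name), is:
```
import paddle
# Trace the layer with a static InputSpec, then save an inference model.
static_model = paddle.jit.to_static(
    model,
    input_spec=[paddle.static.InputSpec(shape=[1, 3, 224, 224], dtype="float32")])
paddle.jit.save(static_model, "pd_model_dygraph/inference_model/model")
```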
## How to convert a custom Caffe Layer
This document describes how to map a custom Caffe Layer to the corresponding implementation in a PaddlePaddle model; users can add their own implementation code as needed so that the whole model converts.
Conversions are already provided for 10 unofficial ops (ops not on the [official site](http://caffe.berkeleyvision.org/tutorial/layers)); their Caffe implementations are listed below:
| op | Reference implementation |
|-------|--------|
| PriorBox | [code](https://github.com/weiliu89/caffe/blob/ssd/src/caffe/layers/prior_box_layer.cpp) |
| DetectionOutput | [code](https://github.com/weiliu89/caffe/blob/ssd/src/caffe/layers/detection_output_layer.cpp) |
| ConvolutionDepthwise | [code](https://github.com/farmingyard/caffe-mobilenet/blob/master/conv_dw_layer.cpp) |
| ShuffleChannel | [code](https://github.com/farmingyard/ShuffleNet/blob/master/shuffle_channel_layer.cpp) |
| Permute | [code](https://github.com/weiliu89/caffe/blob/ssd/src/caffe/layers/permute_layer.cpp) |
| Normalize | [code](https://github.com/weiliu89/caffe/blob/ssd/src/caffe/layers/normalize_layer.cpp) |
| ROIPooling | [code](https://github.com/rbgirshick/caffe-fast-rcnn/blob/0dcd397b29507b8314e252e850518c5695efbb83/src/caffe/layers/roi_pooling_layer.cpp) |
| Axpy | [code](https://github.com/hujie-frank/SENet/blob/master/src/caffe/layers/axpy_layer.cpp) |
| ReLU6 | [code](https://github.com/chuanqi305/ssd/blob/ssd/src/caffe/layers/relu6_layer.cpp) |
| Upsample | [code](https://github.com/eric612/MobileNet-YOLO/blob/master/src/caffe/layers/upsample_layer.cpp) |
The steps for adding a custom layer implementation are as follows:
***Step 1: Get the code***
Since this involves modifying the source, first uninstall x2paddle and then fetch the source code, in the following two steps:
```
......@@ -10,7 +27,7 @@ pip install git+https://github.com/PaddlePaddle/X2Paddle.git@develop
***Step 2: Compile caffe.proto***
This step depends on the protobuf compiler, which can be installed in one of two ways:
> Option 1: pip install protobuf (protobuf >= 3.6.0)
> Option 2: build the [official source](https://github.com/protocolbuffers/protobuf)
Use the script ./tools/compile.sh to compile caffe.proto (which contains the required custom Layer definitions) into the target language (Python)
......@@ -21,8 +38,9 @@ bash ./tools/compile.sh /home/root/caffe/src/caffe/proto
```
***Step 3: Add the implementation of the custom Layer***
**Static graph:**
- Go to ./x2paddle/op_mapper/static/caffe2paddle/caffe_custom_layer and create a .py file, e.g. mylayer.py
- Following the other files in ./x2paddle/op_mapper/static/caffe2paddle/caffe_custom_layer, implement three functions in mylayer.py; roipooling.py is analyzed below as an example (a minimal hypothetical skeleton is also sketched after the registration example below):
1. `def roipooling_shape(input_shape, pooled_w=None, pooled_h=None)`
Parameters:
1. input_shape (list): each element is the shape of one input of this layer; required
......@@ -63,7 +81,116 @@ bash ./toos/compile.sh /home/root/caffe/src/caffe/proto
```
from . import roipooling
```
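For orientation only, a hypothetical mylayer.py could start like the sketch below; copy the exact signatures of the remaining functions and the register call from neighboring files such as roipooling.py:
```python
# Hypothetical skeleton for a custom layer named MyLayer.
def mylayer_shape(input_shape, my_param=None):
    # input_shape: list holding the shape of every input of this layer.
    return [input_shape[0]]
```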
**Dygraph:**
> [Note] If the custom Caffe layer corresponds one-to-one to a Paddle op, use approach 1; otherwise use approach 2.
- Approach 1:
1. Following the mapping methods (taking self and node as inputs) of the CaffeOpMapper class in ./x2paddle/op_mapper/dygraph/caffe2paddle/caffe_op_mapper.py, implement a similar mapping method; take the method below as an example:
```python
def Permute(self, node):
assert len(
node.inputs) == 1, "The count of Permute node\'s input is not 1."
input = self.graph.get_input_node(node, idx=0, copy=True)
params = node.layer.permute_param
order = list(params.order)
self.paddle_graph.add_layer(
"paddle.transpose",
inputs={"x": input.name},
outputs=[node.layer_name],
perm=order)
```
> Steps to complete:
> a. Read the attributes of the Caffe Layer and translate them into the corresponding Paddle attributes.
> b. Get the inputs of the current Layer.
> c. Add a layer to the PaddleGraph with self.paddle_graph.add_layer. The first argument is the Paddle kernel; inputs is a dict mapping each Paddle input key to its input name; outputs is a list of output names; the remaining keyword arguments carry the attribute mapping.
2. Following the shape_xx methods in ./x2paddle/decoder/caffe_shape_inference.py, implement a function that computes the output size of the current Layer; take the method below as an example:
```python
def shape_permute(layer, input_shape):
    order = layer.permute_param.order
    inshape = input_shape[0]
    output_shape = []
    order = list(order)
    for ii in order:
        assert ii < len(inshape), "invalid order for permute[%s]" % (layer.name)
        output_shape.append(inshape[ii])
    return [output_shape]
```
> Parameters:
> layer (caffe_pb2.LayerParameter): the Caffe Layer, from which the attributes of the current Layer can be read.
> input_shape (list): each element is the size of one input of this layer.
- Approach 2:
1. Go to ./x2paddle/op_mapper/dygraph/caffe2paddle/caffe_custom_layer and create a .py file, e.g. mylayer.py
2. Following the other files in ./x2paddle/op_mapper/dygraph/caffe2paddle/caffe_custom_layer, implement one class in mylayer.py; roipooling.py is analyzed below as an example:
```python
import paddle
import paddle.fluid as fluid

class ROIPooling(object):
    def __init__(self, pooled_height, pooled_width, spatial_scale):
        self.roipooling_layer_attrs = {
            "pooled_height": pooled_height,
            "pooled_width": pooled_width,
            "spatial_scale": spatial_scale}

    def __call__(self, x0, x1):
        slice_x1 = paddle.slice(input=x1, axes=[1],
                                starts=[1], ends=[5])
        out = fluid.layers.roi_pool(input=x0,
                                    rois=slice_x1,
                                    **self.roipooling_layer_attrs)
        return out
```
>\_\_init\_\_ function: initializes the layer's attributes
>\_\_call\_\_ function: composes the forward computation of the Layer; its inputs are the inputs the Layer requires
3. Following the mapping methods (taking self and node as inputs) of the CaffeOpMapper class in ./x2paddle/op_mapper/dygraph/caffe2paddle/caffe_op_mapper.py, implement a similar mapping method; take the method below as an example:
```python
def ROIPooling(self, node):
roipooling_name = name_generator("roipooling", self.nn_name2id)
output_name = node.layer_name
layer_outputs = [roipooling_name, output_name]
assert len(
node.inputs) == 2, "The count of ROIPooling node\'s input is not 2."
input0 = self.graph.get_input_node(node, idx=0, copy=True)
input1 = self.graph.get_input_node(node, idx=1, copy=True)
inputs_dict = {}
inputs_dict["x0"] = input0.name
inputs_dict["x1"] = input1.name
params = node.layer.roi_pooling_param
layer_attrs = {
"pooled_height": params.pooled_h,
"pooled_width": params.pooled_w,
"spatial_scale": params.spatial_scale}
self.paddle_graph.add_layer(
"custom_layer:ROIPooling",
inputs=inputs_dict,
outputs=layer_outputs,
**layer_attrs)
```
> Steps to complete:
> a. Read the attributes of the Caffe Layer and translate them into the corresponding Paddle attributes.
> b. Get the inputs of the current Layer.
> c. Add a layer to the PaddleGraph with self.paddle_graph.add_layer. The first argument is the Paddle kernel (here the kernel must start with "custom_layer:"); inputs is a dict mapping each Paddle input key to its input name; outputs is a list of output names; the remaining keyword arguments carry the attribute mapping.
4. Following the shape_xx methods in ./x2paddle/decoder/caffe_shape_inference.py, implement a function that computes the output size of the current Layer; take the method below as an example:
```python
def shape_roipooling(layer, input_shape):
    pooled_w = layer.roi_pooling_param.pooled_w
    pooled_h = layer.roi_pooling_param.pooled_h
    base_fea_shape = input_shape[0]
    rois_shape = input_shape[1]
    output_shape = base_fea_shape
    output_shape[0] = rois_shape[0]
    output_shape[2] = pooled_h
    output_shape[3] = pooled_w
    return [output_shape]
```
> Parameters:
> layer (caffe_pb2.LayerParameter): the Caffe Layer, from which the attributes of the current Layer can be read.
> input_shape (list): each element is the size of one input of this layer.
***Step 4: Run the conversion***
```
......
## How to export a TensorFlow model
This document describes how to export a TensorFlow model in the format X2Paddle supports.
TensorFlow models are generally saved in three formats (checkpoint, FrozenModel, SavedModel); X2Paddle supports FrozenModel (network parameters and structure saved together in one file, keeping only the specified forward subgraph). The examples below show how to export models in this format, starting with the VGG16 model from tensorflow/models.
***The following code is for TensorFlow 1.X***
- checkpoint model + code
Step 1: download the model parameter file
```
wget http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz
......@@ -42,3 +43,70 @@ load_model(sess)
# Export the model
freeze_model(sess, ["vgg_16/fc8/squeezed"], "vgg16.pb")
```
- checkpoint-only model
File structure:
> |--- checkpoint
> |--- model.ckpt-240000.data-00000-of-00001
> |--- model.ckpt-240000.index
> |--- model.ckpt-240000.meta
Load and export the model:
```python
#coding: utf-8
from tensorflow.python.framework import graph_util
import tensorflow as tf
# Freeze-model helper
# output_tensor_names: list of the names of the model's output tensors
# freeze_model_path: file path the model is exported to
def freeze_model(sess, output_tensor_names, freeze_model_path):
out_graph = graph_util.convert_variables_to_constants(
sess, sess.graph.as_graph_def(), output_tensor_names)
with tf.gfile.GFile(freeze_model_path, 'wb') as f:
f.write(out_graph.SerializeToString())
print("freeze model saved in {}".format(freeze_model_path))
# Load the model parameters
# Modify input_checkpoint (the checkpoint prefix) and save_pb_file (the export file path) here
input_checkpoint = "./tfhub_models/save/model.ckpt"
save_pb_file = "./tfhub_models/save.pb"
sess = tf.Session()
saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=True)
saver.restore(sess, input_checkpoint)
# Modify the second argument of freeze_model here: the names of the model's output tensors
freeze_model(sess, ["vgg_16/fc8/squeezed"], save_pb_file)
```
- SavedModel model
File structure:
> |-- variables
> |------ variables.data-00000-of-00001
> |------ variables.index
> |-- saved_model.pb
Load and export the model:
```python
#coding: utf-8
import tensorflow as tf
sess = tf.Session(graph=tf.Graph())
# The last argument of tf.saved_model.loader.load is the directory where the SavedModel is stored
tf.saved_model.loader.load(sess, {}, "/mnt/saved_model")
graph = tf.get_default_graph()
from tensorflow.python.framework import graph_util
# Freeze-model helper
# output_tensor_names: list of the names of the model's output tensors
# freeze_model_path: file path the model is exported to
def freeze_model(sess, output_tensor_names, freeze_model_path):
out_graph = graph_util.convert_variables_to_constants(
sess, sess.graph.as_graph_def(), output_tensor_names)
with tf.gfile.GFile(freeze_model_path, 'wb') as f:
f.write(out_graph.SerializeToString())
print("freeze model saved in {}".format(freeze_model_path))
# Export the model
freeze_model(sess, ["logits"], "model.pb")
```
......@@ -46,7 +46,7 @@ torch_module.load_state_dict(torch_state_dict)
torch_module.eval()
# Run the conversion
from x2paddle.convert import pytorch2paddle
pytorch2paddle(torch_module,
save_dir="pd_model_trace",
jit_type="trace",
input_examples=[torch.tensor(input_data)])
......
import paddle
import paddle.fluid as fluid
import sys
model_dir = sys.argv[1]
new_model_dir = sys.argv[2]
paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())
[inference_program, feed_target_names,
fetch_targets] = paddle.static.load_inference_model(
dirname=model_dir, executor=exe)
print(feed_target_names)
paddle.static.save_inference_model(
dirname=new_model_dir,
feeded_var_names=feed_target_names,
target_vars=fetch_targets,
......
......@@ -132,18 +132,9 @@ def tf2paddle(model_path,
graph_opt = GraphOptimizer(source_frame="tf", paddle_type=paddle_type)
graph_opt.optimize(mapper.paddle_graph)
else:
from x2paddle.optimizer.optimizer import GraphOptimizer
graph_opt = GraphOptimizer(source_frame="tf", paddle_type=paddle_type)
graph_opt.optimize(mapper.paddle_graph)
mapper.paddle_graph.gen_model(save_dir)
......
......@@ -285,7 +285,6 @@ class TFOpMapper(OpMapper):
layer_attrs["dtype"] = string(input_value.dtype)
layer_attrs["fill_value"] = input_value.value
self.paddle_graph.add_layer(
"paddle.full",
inputs=inputs,
......@@ -579,6 +578,9 @@ class TFOpMapper(OpMapper):
outputs=[node.name],
perm=[0, 2, 3, 1])
def FusedBatchNormV3(self, node):
self.FusedBatchNorm(node)
def Mean(self, node):
input = self.graph.get_input_node(node, 0)
reduce_idx = self.graph.get_input_node(node, 1)
......@@ -930,6 +932,23 @@ class TFOpMapper(OpMapper):
outputs=[node.name],
axis=axis)
def Concat(self, node):
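# TF's Concat op carries the concat axis as its first input, so the tensors
# to concatenate are gathered starting from input index 1.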
inputs_list = list()
for i in range(1, len(node.inputs)):
inputs_list.append(self.graph.get_input_node(node, i))
axis = self.graph.get_input_node(node, 0)
assert axis.layer_type == "Const", "axis for ConcatV2 must be type Const"
axis = axis.value
if axis < 0:
axis += len(inputs_list[0].out_shapes[0])
input_names = [i.name for i in inputs_list]
self.paddle_graph.add_layer(
kernel="paddle.concat",
inputs={"x": input_names},
outputs=[node.name],
axis=axis)
def AddN(self, node):
inputs_list = list()
for i in range(len(node.inputs) - 1):
......@@ -1400,6 +1419,7 @@ class TFOpMapper(OpMapper):
inputs = {"x": x.name, "y": y.name}
x_shape = x.out_shapes[0]
y_shape = y.out_shapes[0]
# TODO(syf)
layer_id = self.paddle_graph.add_layer(
"fluid.layers.elementwise_sub", inputs=inputs, outputs=[node.name])
self.paddle_graph.layers[layer_id].input_shapes = {"x": x_shape, "y": y_shape}
......
......@@ -457,7 +457,6 @@ class CaffeOpMapper(OpMapper):
def ReLU(self, node):
"""
:param node:
:return:
"""
......@@ -974,4 +973,3 @@ class CaffeOpMapper(OpMapper):
kernel=op_info,
inputs={"x": self.get_input_name(input)},
outputs=[node.layer_name])
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .transpose_elimination import StaticTransposeElimination
from .transpose_eliminate_pass import StaticTransposeEliminatePass
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from x2paddle.optimizer.pass_ import Pass
from x2paddle.optimizer.elimination.static import StaticTransposeElimination
from x2paddle.optimizer.pass_manager import pass_register
@pass_register
class StaticTransposeEliminatePass(Pass):
name = "static_transpose_eliminate_pass"
def __init__(self):
Pass.__init__(self)
def apply(self, graph):
fuser = StaticTransposeElimination()
fuser.operate(graph)
# Register the pass
static_transpose_eliminate_pass = StaticTransposeEliminatePass()
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import sys
import numpy as np
from x2paddle.optimizer.pattern_matcher import FuseBase
from x2paddle.core.program import PaddleGraph, PaddleLayer
from x2paddle.core.util import *
class StaticTransposeElimination(FuseBase):
def __init__(self):
super(StaticTransposeElimination, self).__init__(graph_type="static")
self.direct_layers = [
'paddle.nn.functional.relu', 'paddle.nn.functional.relu6', 'paddle.abs',
'paddle.nn.functional.sigmoid', 'paddle.exp', 'paddle.rsqrt',
'paddle.nn.functional.swish', 'paddle.tanh',
'paddle.nn.functional.softplus', 'paddle.nn.functional.leaky_relu',
'paddle.floor', 'paddle.erf', 'paddle.square'
]
self.elementwise_layers = [
'paddle.add', 'fluid.layers.elementwise_sub',
'paddle.multiply', 'paddle.divide'
]
self.reduce_layers = [
'paddle.mean', 'paddle.all',
'paddle.max', 'paddle.any',
'paddle.sum', 'paddle.prod'
]
def get_transpose_num(self, graph):
count = 0
for layer_id, layer in graph.layers.items():
if layer.kernel == "paddle.transpose":
count += 1
return count
def operate(self, graph):
total_layer_num = len(graph.layers)
scanned_layers = set()
optimized_transpose_layers = list()
......@@ -43,6 +55,12 @@ class TransposeOpt:
optimized_concat_layers = list()
optimized_elementwise_layers = list()
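# Note: for paddle.nn.* module layers, outputs[0] holds the module's own name,
# so the real tensor output sits at index 1; other kernels keep it at index 0.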
def get_index(layer):
if layer.kernel.startswith("paddle.nn") and "functional" not in layer.kernel:
return 1
else:
return 0
def strip_transpose(_graph):
layers = copy.deepcopy(_graph.layers)
for layer_id, layer in layers.items():
......@@ -53,7 +71,7 @@ class TransposeOpt:
sys.stderr.write("\rOptimize Transpose Layers...{}%".format(
percent))
if layer.kernel != "fluid.layers.transpose":
if layer.kernel != "paddle.transpose":
continue
if layer.attrs["perm"] != [0, 2, 3, 1]:
continue
......@@ -65,7 +83,7 @@ class TransposeOpt:
elementwise_layers = list()
can_be_optimized = True
for out in _graph.edges_out.get(layer_id, []):
if _graph.layers[out].kernel == "fluid.layers.transpose":
if _graph.layers[out].kernel == "paddle.transpose":
if _graph.layers[out].attrs["perm"] != [0, 3, 1, 2]:
can_be_optimized = False
break
......@@ -73,21 +91,24 @@ class TransposeOpt:
elif _graph.layers[out].kernel in self.elementwise_layers:
propagate_layers.append(out)
elif _graph.layers[out].kernel in self.direct_layers:
output_index = get_index(_graph.layers[out])
if _graph.layers[out].outputs[output_index] in _graph.outputs:
can_be_optimized = False
break
propagate_layers.append(out)
elif _graph.layers[out].kernel in self.reduce_layers:
output_index = get_index(_graph.layers[out])
if _graph.layers[out].outputs[output_index] in _graph.outputs:
can_be_optimized = False
break
if not _graph.layers[out].attrs.get('keepdim', False):
can_be_optimized = False
break
propagate_layers.append(out)
reduce_layers.append(out)
elif _graph.layers[out].kernel == "fluid.layers.concat":
if _graph.layers[out].outputs[0] in _graph.outputs:
elif _graph.layers[out].kernel == "paddle.concat":
ouput_index = get_index(_graph.layers[out])
if _graph.layers[out].outputs[ouput_index] in _graph.outputs:
can_be_optimized = False
break
propagate_layers.append(out)
......@@ -102,37 +123,41 @@ class TransposeOpt:
visited_layers.add(current_id)
for out in _graph.edges_out.get(current_id, []):
if _graph.layers[
out].kernel == "fluid.layers.transpose":
out].kernel == "paddle.transpose":
if _graph.layers[out].attrs["perm"] != [0, 3, 1, 2]:
can_be_optimized = False
break
transpose_layers.append(out)
elif _graph.layers[
out].kernel in self.elementwise_layers:
output_index = get_index(_graph.layers[out])
if _graph.layers[out].outputs[output_index] in _graph.outputs:
can_be_optimized = False
break
if out not in visited_layers:
propagate_layers.append(out)
elif _graph.layers[out].kernel in self.direct_layers:
output_index = get_index(_graph.layers[out])
if _graph.layers[out].outputs[output_index] in _graph.outputs:
can_be_optimized = False
break
if out not in visited_layers:
propagate_layers.append(out)
elif _graph.layers[out].kernel in self.reduce_layers:
output_index = get_index(_graph.layers[out])
if _graph.layers[out].outputs[output_index] in _graph.outputs:
can_be_optimized = False
break
if not _graph.layers[out].attrs.get('keepdim',
False):
can_be_optimized = False
break
if out not in visited_layers:
propagate_layers.append(out)
reduce_layers.append(out)
elif _graph.layers[out].kernel == "fluid.layers.concat":
if _graph.layers[out].outputs[0] in _graph.outputs:
elif _graph.layers[out].kernel == "paddle.concat":
output_index = get_index(_graph.layers[out])
if _graph.layers[out].outputs[output_index] in _graph.outputs:
can_be_optimized = False
break
if out not in visited_layers:
......@@ -149,14 +174,15 @@ class TransposeOpt:
current_id].input_shapes['x']
y_shape = _graph.layers[
current_id].input_shapes['y']
output_index = get_index(_graph.layers[ipt])
if _graph.layers[ipt].outputs[
output_index] == _graph.layers[current_id].inputs[
'x']:
if len(x_shape) <= 1:
elementwise_layers.append(current_id)
continue
elif _graph.layers[ipt].outputs[
output_index] == _graph.layers[current_id].inputs[
'y']:
if len(y_shape) <= 1:
elementwise_layers.append(current_id)
......@@ -168,8 +194,9 @@ class TransposeOpt:
except Exception as e:
can_be_optimized = False
break
output_index = get_index(_graph.layers[ipt])
if _graph.layers[
ipt].kernel == "fluid.layers.transpose":
ipt].kernel == "paddle.transpose":
if _graph.layers[ipt].attrs["perm"] != [0, 2, 3, 1]:
can_be_optimized = False
break
......@@ -177,30 +204,30 @@ class TransposeOpt:
transpose_layers.append(ipt)
elif _graph.layers[
ipt].kernel in self.elementwise_layers:
if _graph.layers[ipt].outputs[output_index] in _graph.outputs:
can_be_optimized = False
break
if ipt not in visited_layers:
propagate_layers.append(ipt)
elif _graph.layers[ipt].kernel in self.direct_layers:
if _graph.layers[ipt].outputs[output_index] in _graph.outputs:
can_be_optimized = False
break
if ipt not in visited_layers:
propagate_layers.append(ipt)
elif _graph.layers[ipt].kernel in self.reduce_layers:
if _graph.layers[ipt].outputs[output_index] in _graph.outputs:
can_be_optimized = False
break
if not _graph.layers[ipt].attrs.get('keepdim',
False):
can_be_optimized = False
break
if ipt not in visited_layers:
propagate_layers.append(ipt)
reduce_layers.append(ipt)
elif _graph.layers[ipt].kernel == "fluid.layers.concat":
if _graph.layers[ipt].outputs[0] in _graph.outputs:
elif _graph.layers[ipt].kernel == "paddle.concat":
if _graph.layers[ipt].outputs[output_index] in _graph.outputs:
can_be_optimized = False
break
if ipt not in visited_layers:
......@@ -217,7 +244,8 @@ class TransposeOpt:
transpose_layers.append(layer_id)
transpose_layers = list(set(transpose_layers))
for l in transpose_layers:
output_index = get_index(graph.layers[l])
if graph.layers[l].outputs[output_index] in graph.outputs:
can_be_optimized = False
break
if not can_be_optimized:
......@@ -243,17 +271,19 @@ class TransposeOpt:
for layer_id in list(set(optimized_transpose_layers)):
graph.del_layer(layer_id)
for layer_id in list(set(optimized_reduce_layers)):
dim = graph.layers[layer_id].attrs.get('axis', None)
if dim is not None:
for i in range(len(dim)):
dim[i] = [0, 2, 3, 1][dim[i]]
graph.layers[layer_id].attrs['axis'] = dim
for layer_id in list(set(optimized_concat_layers)):
axis = graph.layers[layer_id].attrs.get('axis', 0)
graph.layers[layer_id].attrs['axis'] = [0, 2, 3, 1][axis]
for layer_id in list(set(optimized_elementwise_layers)):
axis = graph.layers[layer_id].attrs.get('axis', -1)
graph.layers[layer_id].attrs['axis'] = [0, 2, 3, 1][axis]
if graph.layers[layer_id].kernel == "paddle.add":
graph.layers[layer_id].kernel = "fluid.layers.elementwise_add"
current_transpose_num = self.get_transpose_num(graph)
print(
......
......@@ -105,10 +105,6 @@ class DygraphConv2DAddFuser(FuseBase):
if layer.kernel == "paddle.nn.Conv2D":
conv_id = layer_id
for layer_id, layer in matches.items():
if layer.kernel == "paddle.nn.functional.conv2d_transpose":
layer.bias = bias_name
if not is_transpose:
layer.outputs[0] = output_name
if layer.kernel == "paddle.nn.Conv2D":
layer.attrs["bias_attr"] = bias_name
if not is_transpose:
......
......@@ -14,3 +14,10 @@
from .bn_scale_fuser import Static_BNScaleFuser
from .bn_scale_fuse_pass import Static_BNScaleFusePass
from .conv2d_add_fuser import StaticConv2DAddFuser
from .conv2d_add_fuse_pass import StaticConv2DAddFusePass
from .prelu_fuser import StaticPReLUFuser
from .prelu_fuse_pass import StaticPReLUFusePass
from .tf_batchnorm_fuser import StaticTFBatchNormFuser
from .tf_batchnorm_fuse_pass import StaticTFBatchNormFusePass
......@@ -79,7 +79,6 @@ class Static_BNScaleFuser(FuseBase):
graph.layers[new_layer_id] = new_layer
matches.pop(new_layer_id)
def gen_new_layer(self, parameters, matches):
layers_id = list(matches.keys())
layer = matches[layers_id[0]]
......
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from x2paddle.optimizer.pass_ import Pass
from x2paddle.optimizer.fusion.static import StaticConv2DAddFuser
from x2paddle.optimizer.pass_manager import pass_register
@pass_register
class StaticConv2DAddFusePass(Pass):
name = "static_conv2d_add_fuse_pass"
def __init__(self):
Pass.__init__(self)
def apply(self, graph):
fuser = StaticConv2DAddFuser()
fuser.operate(graph, match_kind="edge")
# Register the pass
static_conv2d_add_fuse_pass = StaticConv2DAddFusePass()
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import numpy as np
from x2paddle.optimizer.pattern_matcher import FuseBase
from x2paddle.core.program import PaddleGraph, PaddleLayer
from x2paddle.core.util import *
class StaticConv2DAddFuser(FuseBase):
def __init__(self):
super(StaticConv2DAddFuser, self).__init__(graph_type="static")
self.patterns = list()
def build_pattern(self):
""" 描述需要替换的conv2d+add图结构。
conv2d+add层模式python实现代码示例:
模式一:
MobilenetV1_Logits_Conv2d_1c_1x1_biases = paddle.static.create_parameter(dtype='float32', shape=[1001], name='MobilenetV1_Logits_Conv2d_1c_1x1_biases', default_initializer=paddle.nn.initializer.Constant(value=0.0))
conv2d_transpose_14 = paddle.transpose(x=MobilenetV1_Logits_AvgPool_1a_AvgPool, perm=[0, 3, 1, 2])
MobilenetV1_Logits_Conv2d_1c_1x1_Conv2D = paddle.nn.functional.conv2d(x=conv2d_transpose_14, weight=MobilenetV1_Logits_Conv2d_1c_1x1_weights, bias=None, stride=[1, 1], dilation=[1, 1], padding='SAME')
MobilenetV1_Logits_Conv2d_1c_1x1_Conv2D = paddle.transpose(x=MobilenetV1_Logits_Conv2d_1c_1x1_Conv2D, perm=[0, 2, 3, 1])
MobilenetV1_Logits_Conv2d_1c_1x1_BiasAdd = paddle.add(x=MobilenetV1_Logits_Conv2d_1c_1x1_Conv2D, y=MobilenetV1_Logits_Conv2d_1c_1x1_biases)
Pattern 2:
MobilenetV1_Logits_Conv2d_1c_1x1_biases = paddle.static.create_parameter(dtype='float32', shape=[1001], name='MobilenetV1_Logits_Conv2d_1c_1x1_biases', default_initializer=paddle.nn.initializer.Constant(value=0.0))
MobilenetV1_Logits_Conv2d_1c_1x1_Conv2D = paddle.nn.functional.conv2d(x=conv2d_transpose_14, weight=MobilenetV1_Logits_Conv2d_1c_1x1_weights, bias=None, stride=[1, 1], dilation=[1, 1], padding='SAME')
MobilenetV1_Logits_Conv2d_1c_1x1_BiasAdd = paddle.add(x=MobilenetV1_Logits_Conv2d_1c_1x1_Conv2D, y=MobilenetV1_Logits_Conv2d_1c_1x1_biases)
"""
def gen_name(id):
return "x" + str(id)
pattern = PaddleGraph(graph_type="dygraph")
pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(0)])
pattern.add_layer(
kernel="paddle.transpose",
inputs={"x": "conv-input-0"},
outputs=[gen_name(1)],
perm=[0, 3, 1, 2])
pattern.add_layer(
kernel="paddle.nn.functional.conv2d",
inputs={"input": gen_name(1),
"weight": "conv-input-1"},
outputs=[gen_name(2)])
pattern.add_layer(
kernel="paddle.transpose",
inputs={"x": gen_name(2)},
outputs=[gen_name(2)],
perm=[0, 2, 3, 1])
pattern.add_layer(
kernel="paddle.add",
inputs={"x": gen_name(2),
"y": gen_name(0)},
outputs=[gen_name(3)])
pattern.build(inputs={"input-0": "conv-input-0",
"input-1": "conv-input-1"})
self.patterns.append(pattern)
pattern = PaddleGraph(graph_type="dygraph")
pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(0)])
pattern.add_layer(
kernel="paddle.nn.functional.conv2d",
inputs={"input": "conv-input-0",
"weight": "conv-input-1"},
outputs=[gen_name(1)])
pattern.add_layer(
kernel="paddle.add",
inputs={"x": gen_name(1),
"y": gen_name(0)},
outputs=[gen_name(2)])
pattern.build(inputs={"input-0": "conv-input-0",
"input-1": "conv-input-1"})
self.patterns.append(pattern)
def insert_new_layer(self, graph, parameters, matches):
self.gen_new_layer(matches, graph)
matches_copy = copy.deepcopy(matches)
for layer_id, layer in matches_copy.items():
if layer.kernel not in ["paddle.add"]:
matches.pop(layer_id)
def gen_new_layer(self, matches, graph):
is_transpose = False
for layer_id, layer in matches.items():
if layer.kernel == "paddle.static.create_parameter":
bias_name = layer.attrs["name"][1: -1]
if layer.kernel == "paddle.transpose":
is_transpose = True
if layer.kernel == "paddle.add":
output_name = layer.outputs[0]
if layer.kernel == "paddle.nn.functional.conv2d":
conv_id = layer_id
for layer_id, layer in matches.items():
if layer.kernel == "paddle.nn.functional.conv2d":
layer.inputs["bias"] = bias_name
layer.attrs.pop("bias")
if not is_transpose:
layer.outputs[0] = output_name
if layer.kernel == "paddle.transpose":
if conv_id in graph.edges_in[layer_id]:
layer.outputs[0] = output_name
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from x2paddle.optimizer.pass_ import Pass
from x2paddle.optimizer.fusion.static import StaticPReLUFuser
from x2paddle.optimizer.pass_manager import pass_register
@pass_register
class StaticPReLUFusePass(Pass):
name = "static_prelu_fuse_pass"
def __init__(self):
Pass.__init__(self)
def apply(self, graph):
fuser = StaticPReLUFuser()
fuser.operate(graph, match_kind="edge")
# Register the pass
static_prelu_fuse_pass = StaticPReLUFusePass()
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import numpy as np
from collections import OrderedDict
from x2paddle.optimizer.pattern_matcher import FuseBase
from x2paddle.core.program import PaddleGraph, PaddleLayer
from x2paddle.core.util import *
class StaticPReLUFuser(FuseBase):
def __init__(self):
super(StaticPReLUFuser, self).__init__(graph_type="static")
def build_pattern(self):
""" 描述需要替换的prelu图结构。
prelu层模式python实现代码示例:
conv4_alphas = paddle.static.create_parameter(dtype='float32', shape=[128], name='conv4_alphas', default_initializer=paddle.nn.initializer.Constant(value=0.0))
conv4_mul_1_y = paddle.full(dtype='float32', shape=[1], fill_value=0.5)
conv4_Relu = paddle.nn.functional.relu(x=conv4_BiasAdd)
conv4_Abs = paddle.abs(x=conv4_BiasAdd)
conv4_sub = fluid.layers.elementwise_sub(x=conv4_BiasAdd, y=conv4_Abs)
conv4_mul = paddle.multiply(x=conv4_alphas, y=conv4_sub)
conv4_mul_1 = paddle.multiply(x=conv4_mul, y=conv4_mul_1_y)
conv4_add = paddle.add(x=conv4_Relu, y=conv4_mul_1)
"""
def gen_name(id):
return "x" + str(id)
self.pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(0)])
self.pattern.add_layer(
"paddle.full",
inputs={},
outputs=[gen_name(1)],
shape=[1],
fill_value=0.5)
self.pattern.add_layer(
"paddle.nn.functional.relu",
inputs={"x": "prelu-input-0"},
outputs=[gen_name(2)])
self.pattern.add_layer(
"paddle.abs",
inputs={"x": "prelu-input-0"},
outputs=[gen_name(3)])
self.pattern.add_layer(
"fluid.layers.elementwise_sub",
inputs={"x": "prelu-input-0",
"y": gen_name(3)},
outputs=[gen_name(4)])
self.pattern.add_layer(
"paddle.multiply",
inputs={"x": gen_name(0),
"y": gen_name(4)},
outputs=[gen_name(5)])
self.pattern.add_layer(
"paddle.multiply",
inputs={"x": gen_name(5),
"y": gen_name(1)},
outputs=[gen_name(6)])
self.pattern.add_layer(
"paddle.add",
inputs={"x": gen_name(2),
"y": gen_name(6)},
outputs=[gen_name(7)])
self.pattern.build(inputs={"input-0": "prelu-input-0", })
def insert_new_layer(self, graph, parameters, matches):
new_layers, last_layer_id = self.gen_new_layer(matches, parameters, graph)
matches_copy = copy.deepcopy(matches)
for layer_id, layer in matches_copy.items():
for i in range(4):
if layer_id == new_layers[i].id:
matches.pop(new_layers[i].id)
prefix_layers = OrderedDict()
mid_layers = OrderedDict()
suffix_layers = OrderedDict()
is_need_id = False
for layer_id, layer in graph.layers.items():
if is_need_id:
suffix_layers[layer_id] = layer
else:
if layer_id == last_layer_id:
for i in range(4):
mid_layers[new_layers[i].id] = new_layers[i]
is_need_id = True
prefix_layers[layer_id] = layer
prefix_layers.update(mid_layers)
prefix_layers.update(suffix_layers)
graph.layers = prefix_layers
def gen_new_layer(self, matches, parameters, graph):
layer_id_list = list(matches.keys())
layer_id_list.sort(key = int)
for layer_id, layer in matches.items():
if layer.kernel == "paddle.nn.functional.relu":
input_name = layer.inputs["x"]
if layer.kernel == "paddle.static.create_parameter":
param_layer = layer
param_name = layer.outputs[0]
if layer.kernel == "paddle.add":
output_name = layer.outputs[0]
transpose0 = PaddleLayer(
id=layer_id_list[-1] + "_1",
kernel="paddle.transpose",
inputs={"x": input_name},
outputs=["{}_transpose_for_prelu".format(input_name)],
perm=[0, 3, 1, 2])
param = parameters[param_name]
c = param.shape[0]
prelu = PaddleLayer(id=layer_id_list[-1] + "_2",
kernel="paddle.nn.functional.prelu",
inputs={"x": "{}_transpose_for_prelu".format(input_name),
"weight": param_name},
outputs=["{}_prelu".format(input_name)])
transpose1 = PaddleLayer(
id=layer_id_list[-1] + "_3",
kernel="paddle.transpose",
inputs={"x": "{}_prelu".format(input_name)},
outputs=[output_name],
perm=[0, 2, 3, 1])
return [param_layer, transpose0, prelu, transpose1], layer_id_list[-1]
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from x2paddle.optimizer.pass_ import Pass
from x2paddle.optimizer.fusion.static import StaticTFBatchNormFuser
from x2paddle.optimizer.pass_manager import pass_register
@pass_register
class StaticTFBatchNormFusePass(Pass):
name = "static_tf_batchnorm_fuse_pass"
def __init__(self):
Pass.__init__(self)
def apply(self, graph):
fuser = StaticTFBatchNormFuser()
fuser.operate(graph, match_kind="edge")
# Register the pass
static_tf_batchnorm_fuse_pass = StaticTFBatchNormFusePass()
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import numpy as np
from collections import OrderedDict
from x2paddle.optimizer.pattern_matcher import FuseBase
from x2paddle.core.program import PaddleGraph, PaddleLayer
from x2paddle.core.util import *
class StaticTFBatchNormFuser(FuseBase):
def __init__(self):
super(StaticTFBatchNormFuser, self).__init__(graph_type="static")
self.patterns = list()
def build_pattern(self):
""" 描述需要替换的batchnorm图结构。
batchnorm层模式python实现代码示例:
"""
def gen_name(id):
return "x" + str(id)
pattern = PaddleGraph(graph_type="dygraph")
pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(0)])
pattern.add_layer(
"paddle.full",
inputs={},
outputs=[gen_name(1)],
shape=[1])
pattern.add_layer(
"paddle.add",
inputs={"x": gen_name(0), "y": gen_name(1)},
outputs=[gen_name(2)])
pattern.add_layer(
"paddle.rsqrt",
inputs={"x": gen_name(2)},
outputs=[gen_name(3)])
pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(4)])
pattern.add_layer(
"paddle.multiply",
inputs={"x": gen_name(3), "y": gen_name(4)},
outputs=[gen_name(5)])
pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(6)])
pattern.add_layer(
"paddle.multiply",
inputs={"x": gen_name(6), "y": gen_name(5)},
outputs=[gen_name(7)])
pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(8)])
pattern.add_layer(
"fluid.layers.elementwise_sub",
inputs={"x": gen_name(8), "y": gen_name(7)},
outputs=[gen_name(9)])
pattern.add_layer(
"paddle.multiply",
inputs={"x": "bn-input-0", "y": gen_name(5)},
outputs=[gen_name(10)])
pattern.add_layer(
"paddle.add",
inputs={"x": gen_name(10), "y": gen_name(9)},
outputs=[gen_name(11)])
pattern.build(inputs={"input-0": "bn-input-0", })
self.patterns.append(pattern)
pattern = PaddleGraph(graph_type="dygraph")
pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(0)])
pattern.add_layer(
"paddle.full",
inputs={},
outputs=[gen_name(1)],
shape=[1])
pattern.add_layer(
"paddle.add",
inputs={"x": gen_name(0), "y": gen_name(1)},
outputs=[gen_name(2)])
pattern.add_layer(
"paddle.rsqrt",
inputs={"x": gen_name(2)},
outputs=[gen_name(3)])
pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(4)])
pattern.add_layer(
"paddle.multiply",
inputs={"x": gen_name(3), "y": gen_name(4)},
outputs=[gen_name(5)])
pattern.add_layer(
"paddle.multiply",
inputs={"x": "bn-input-0", "y": gen_name(5)},
outputs=[gen_name(10)])
pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(6)])
pattern.add_layer(
"paddle.multiply",
inputs={"x": gen_name(6), "y": gen_name(5)},
outputs=[gen_name(7)])
pattern.add_layer(
"paddle.static.create_parameter",
inputs={},
outputs=[gen_name(8)])
pattern.add_layer(
"fluid.layers.elementwise_sub",
inputs={"x": gen_name(8), "y": gen_name(7)},
outputs=[gen_name(9)])
pattern.add_layer(
"paddle.add",
inputs={"x": gen_name(10), "y": gen_name(9)},
outputs=[gen_name(11)])
pattern.build(inputs={"input-0": "bn-input-0", })
self.patterns.append(pattern)
def insert_new_layer(self, graph, parameters, matches):
new_layers, last_layer_id = self.gen_new_layer(matches, parameters, graph)
matches_copy = copy.deepcopy(matches)
for layer_id, layer in matches_copy.items():
for i in range(7):
if layer_id == new_layers[i].id:
matches.pop(new_layers[i].id)
prefix_layers = OrderedDict()
mid_layers = OrderedDict()
suffix_layers = OrderedDict()
is_need_id = False
for layer_id, layer in graph.layers.items():
if is_need_id:
suffix_layers[layer_id] = layer
else:
if layer_id == last_layer_id:
for i in range(7):
mid_layers[new_layers[i].id] = new_layers[i]
is_need_id = True
prefix_layers[layer_id] = layer
prefix_layers.update(mid_layers)
prefix_layers.update(suffix_layers)
graph.layers = prefix_layers
def gen_new_layer(self, matches, parameters, graph):
layer_id_list = list(matches.keys())
layer_id_list.sort(key = int)
for layer_id, layer in matches.items():
if layer.kernel == "paddle.full":
full_layer = layer
out_layer_id = graph.edges_out[layer_id][0]
if matches[out_layer_id].kernel == "paddle.add":
var_layer_id = graph.edges_in[out_layer_id][0]
var_layer = matches[var_layer_id]
if layer.kernel == "paddle.rsqrt":
out_layer_id = graph.edges_out[layer_id][0]
if matches[out_layer_id].kernel == "paddle.multiply":
gamma_layer_id = graph.edges_in[out_layer_id][1]
gamma_layer = matches[gamma_layer_id]
if layer.kernel == "fluid.layers.elementwise_sub":
in_layer_id = graph.edges_in[layer_id][0]
beta_layer = matches[in_layer_id]
in_layer_id = graph.edges_in[layer_id][1]
in_layer_id = graph.edges_in[in_layer_id][0]
mean_layer = matches[in_layer_id]
out_layer_id = graph.edges_out[layer_id][0]
add_layer = matches[out_layer_id]
if layer.kernel == "paddle.multiply":
in_layer_id = graph.edges_in[layer_id][1]
mul_layer = matches[in_layer_id]
if mul_layer.kernel == "paddle.multiply":
in_layer_id = graph.edges_in[layer_id][0]
if in_layer_id not in matches:
input_name = layer.inputs["x"]
transpose0 = PaddleLayer(
id=layer_id_list[-1] + "_1",
kernel="paddle.transpose",
inputs={"x": input_name},
outputs=["{}_transpose_for_bn".format(input_name)],
perm=[0, 3, 1, 2])
params = parameters[gamma_layer.outputs[0]]
c = params.shape[0]
bn = PaddleLayer(
id=layer_id_list[-1] + "_2",
kernel="paddle.nn.functional.batch_norm",
inputs={"x": "{}_transpose_for_bn".format(input_name),
"running_mean": mean_layer.outputs[0],
"running_var": var_layer.outputs[0],
"weight": gamma_layer.outputs[0],
"bias": beta_layer.outputs[0]},
outputs=["{}_bn".format(input_name)],
epsilon=full_layer.attrs["fill_value"])
transpose1 = PaddleLayer(
id=layer_id_list[-1] + "_3",
kernel="paddle.transpose",
inputs={"x": "{}_bn".format(input_name)},
outputs=add_layer.outputs,
perm=[0, 2, 3, 1])
mean_layer.id = layer_id_list[-1] + "_01"
var_layer.id = layer_id_list[-1] + "_02"
gamma_layer.id = layer_id_list[-1] + "_03"
beta_layer.id = layer_id_list[-1] + "_04"
return [mean_layer, var_layer, gamma_layer, beta_layer,
transpose0, bn, transpose1], layer_id_list[-1]
......@@ -16,13 +16,13 @@ from x2paddle.optimizer.pass_manager import PassManager
from x2paddle.optimizer.fusion.dygraph import *
from x2paddle.optimizer.fusion.static import *
from x2paddle.optimizer.elimination.dygraph import *
from x2paddle.optimizer.elimination.static import *
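# Choose the pass list according to the source framework and the target
# program form (dygraph or static); unknown frameworks get no passes.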
class GraphOptimizer(object):
def __init__(self, source_frame, paddle_type="dygraph", jit_type="trace"):
if source_frame == "pytorch":
if jit_type == "trace":
self.passes = ["dygraph_constant_fuse_pass",
"trace_fc_fuse_pass"]
self.passes = ["trace_fc_fuse_pass"]
else:
self.passes = [
"dygraph_constant_fuse_pass",
......@@ -39,12 +39,20 @@ class GraphOptimizer(object):
else:
self.passes = ["static_bn_scale_fuse_pass"]
elif source_frame == "tf":
if paddle_type == "dygraph":
self.passes = [
"dygraph_conv2d_add_fuse_pass",
"dygraph_tf_batchnorm_fuse_pass",
"dygraph_prelu_fuse_pass",
"transpose_eliminate_pass"
]
else:
self.passes = [
"static_conv2d_add_fuse_pass",
"static_tf_batchnorm_fuse_pass",
"static_prelu_fuse_pass",
"static_transpose_eliminate_pass"
]
else:
self.passes = []
......
import copy
from collections import OrderedDict
from x2paddle.core.program import PaddleLayer
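# Fuse TensorFlow's decomposed batch norm, x * scale + (beta - mean * scale)
# with scale = rsqrt(variance + epsilon) * gamma, into a single
# fluid.layers.batch_norm call.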
class BatchNormOpt:
def __init__(self):
pass
def run(self, graph):
print("Optimize: BatchNormOpt...")
layers = copy.deepcopy(graph.layers)
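# Walk backwards from every elementwise_add, checking the kernel, the
# broadcast axis and the fan-out at each step; any mismatch with the
# decomposed batch-norm pattern skips the candidate.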
for layer_id, layer in layers.items():
if layer.kernel != "fluid.layers.elementwise_add":
continue
axis = layer.attrs.get('axis', -1)
if axis != -1 and axis != 3:
continue
input_ids0 = graph.edges_in[layer_id]
mul_layer0 = graph.layers[input_ids0[0]]
sub_layer0 = graph.layers[input_ids0[1]]
if mul_layer0.kernel != "fluid.layers.elementwise_mul":
continue
if sub_layer0.kernel != "fluid.layers.elementwise_sub":
continue
axis = mul_layer0.attrs.get('axis', -1)
if axis != -1 and axis != 3:
continue
axis = sub_layer0.attrs.get('axis', -1)
if axis != -1 and axis != 0:
continue
if len(graph.edges_out.get(input_ids0[0], [])) != 1:
continue
if len(graph.edges_out.get(input_ids0[1], [])) != 1:
continue
input_ids1 = graph.edges_in[input_ids0[0]]
nhwc_input = graph.layers[input_ids1[0]]
mul_layer1 = graph.layers[input_ids1[1]]
if mul_layer1.kernel != "fluid.layers.elementwise_mul":
continue
axis = mul_layer1.attrs.get('axis', -1)
if axis != -1 and axis != 0:
continue
if len(graph.edges_out.get(input_ids1[1], [])) != 2:
continue
input_ids2 = graph.edges_in[input_ids0[1]]
beta = graph.layers[input_ids2[0]]
mul_layer2 = graph.layers[input_ids2[1]]
if beta.kernel != "fluid.layers.create_parameter":
continue
axis = mul_layer2.attrs.get('axis', -1)
if axis != -1 and axis != 0:
continue
if len(graph.edges_out.get(input_ids2[0], [])) != 1:
continue
if len(graph.edges_out.get(input_ids2[1], [])) != 1:
continue
if beta.outputs[0] not in graph.parameters:
continue
beta_shape = graph.parameters[beta.outputs[0]].shape
if len(beta_shape) != 1:
continue
input_ids3 = graph.edges_in[input_ids2[1]]
mean = graph.layers[input_ids3[0]]
mul_layer3 = graph.layers[input_ids3[1]]
if mean.kernel != "fluid.layers.create_parameter":
continue
axis = mul_layer3.attrs.get('axis', -1)
if axis != -1 and axis != 0:
continue
if len(graph.edges_out.get(input_ids3[0], [])) != 1:
continue
if len(graph.edges_out.get(input_ids3[1], [])) != 2:
continue
if mul_layer3.id != mul_layer1.id:
continue
if mean.outputs[0] not in graph.parameters:
continue
mean_shape = graph.parameters[mean.outputs[0]].shape
if mean_shape != beta_shape:
continue
input_ids4 = graph.edges_in[input_ids3[1]]
rsqrt_layer = graph.layers[input_ids4[0]]
gamma = graph.layers[input_ids4[1]]
if rsqrt_layer.kernel != "fluid.layers.rsqrt":
continue
if gamma.kernel != "fluid.layers.create_parameter":
continue
if len(graph.edges_out.get(input_ids4[0], [])) != 1:
continue
if len(graph.edges_out.get(input_ids4[1], [])) != 1:
continue
if gamma.outputs[0] not in graph.parameters:
continue
gamma_shape = graph.parameters[gamma.outputs[0]].shape
if gamma_shape != beta_shape:
continue
input_ids5 = graph.edges_in[input_ids4[0]]
add_layer = graph.layers[input_ids5[0]]
if add_layer.kernel != "fluid.layers.elementwise_add":
continue
axis = add_layer.attrs.get('axis', -1)
if axis != -1 and axis != 0:
continue
if len(graph.edges_out.get(input_ids5[0], [])) != 1:
continue
input_ids6 = graph.edges_in[input_ids5[0]]
variance = graph.layers[input_ids6[0]]
other = graph.layers[input_ids6[1]]
if variance.kernel != "fluid.layers.create_parameter":
continue
if other.kernel != "fluid.layers.fill_constant":
continue
if len(graph.edges_out.get(input_ids6[0], [])) != 1:
continue
if len(graph.edges_out.get(input_ids6[1], [])) != 1:
continue
if variance.outputs[0] not in graph.parameters:
continue
variance_shape = graph.parameters[variance.outputs[0]].shape
if variance_shape != beta_shape:
continue
ids = set([
layer_id, mul_layer0.id, sub_layer0.id, mul_layer1.id, beta.id,
mul_layer2.id, mean.id, mul_layer3.id, rsqrt_layer.id, gamma.id,
add_layer.id, variance.id, other.id
])
for matched_id in ids:
del graph.layers[matched_id]
if matched_id in graph.edges_in:
del graph.edges_in[matched_id]
if matched_id in graph.edges_out:
del graph.edges_out[matched_id]
copy_layers = copy.deepcopy(graph.layers)
graph.layers = OrderedDict()
for k, v in copy_layers.items():
if k != nhwc_input.id:
graph.layers[k] = v
continue
graph.layers[k] = v
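# The subgraph is NHWC but fluid.layers.batch_norm expects NCHW, so the
# fused op is sandwiched between two transposes.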
transpose0 = PaddleLayer(
id='{}_1'.format(k),
kernel="fluid.layers.transpose",
inputs={"x": v.outputs[0]},
outputs=["transpose_for_bn"],
perm=[0, 3, 1, 2])
bn = PaddleLayer(
id='{}_2'.format(k),
kernel="fluid.layers.batch_norm",
inputs={"input": "transpose_for_bn"},
outputs=layer.outputs,
epsilon=other.attrs["value"],
param_attr="'{}'".format(gamma.outputs[0]),
bias_attr="'{}'".format(beta.outputs[0]),
moving_mean_name="'{}'".format(mean.outputs[0]),
moving_variance_name="'{}'".format(variance.outputs[0]))
transpose1 = PaddleLayer(
id=layer_id,
kernel="fluid.layers.transpose",
inputs={"x": layer.outputs[0]},
outputs=layer.outputs,
perm=[0, 2, 3, 1])
graph.layers[transpose0.id] = transpose0
graph.layers[bn.id] = bn
graph.layers[transpose1.id] = transpose1
graph.build()
import copy
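# Fold an elementwise_add of a 1-D create_parameter that follows a conv2d /
# conv2d_transpose (optionally through an NHWC transpose) into the conv's
# bias_attr, then delete the separate add and parameter layers.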
class BiasOpt:
def __init__(self):
self.conv_layers = [
'fluid.layers.conv2d', 'fluid.layers.conv2d_transpose'
]
def run(self, graph):
print("Optimize: BiasOpt...")
layers = copy.deepcopy(graph.layers)
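# A conv qualifies only if its sole consumer is an elementwise_add whose
# other input is a 1-D create_parameter used nowhere else.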
for layer_id, layer in layers.items():
if layer.kernel in self.conv_layers or layer.kernel == "fluid.layers.transpose":
if len(graph.edges_out.get(layer_id, [])) > 1:
continue
if layer.outputs[0] in graph.outputs:
continue
out_layer_id = graph.edges_out[layer_id][0]
if graph.layers[
out_layer_id].kernel != "fluid.layers.elementwise_add":
continue
if graph.layers[out_layer_id].attrs.get('axis', -1) != -1:
continue
in_layer_id = graph.edges_in[out_layer_id]
bias_layer_id = in_layer_id[1 - in_layer_id.index(layer_id)]
if graph.layers[
bias_layer_id].kernel != "fluid.layers.create_parameter":
continue
bias_layer = graph.layers[bias_layer_id]
if len(bias_layer.attrs['shape']) != 1:
continue
if len(graph.edges_out[bias_layer_id]) != 1:
continue
if layer.kernel == "fluid.layers.transpose":
if layer.attrs['perm'] != [0, 2, 3, 1]:
continue
in_layer_id = graph.edges_in[layer_id][0]
if graph.layers[in_layer_id].kernel not in self.conv_layers:
continue
if graph.layers[in_layer_id].attrs['bias_attr'] is not False:
continue
if graph.layers[in_layer_id].outputs[0] in graph.outputs:
continue
if len(graph.edges_out[in_layer_id]) != 1:
continue
graph.layers[in_layer_id].attrs[
'bias_attr'] = bias_layer.attrs['name']
else:
graph.layers[layer_id].attrs[
'bias_attr'] = bias_layer.attrs['name']
bias_add_outs = graph.edges_out.get(out_layer_id, [])
bias_add_output = graph.layers[out_layer_id].outputs[0]
graph.del_layer(bias_layer_id)
graph.del_layer(out_layer_id)
for out in bias_add_outs:
for k, v in graph.layers[out].inputs.items():
if v == layer.outputs[0]:
graph.layers[out].inputs[k] = bias_add_output
graph.layers[layer_id].outputs[0] = bias_add_output
if layer.kernel == "fluid.layers.transpose":
in_layer_id = graph.edges_in[layer_id][0]
graph.layers[in_layer_id].outputs[0] = bias_add_output
graph.layers[layer_id].inputs['x'] = bias_add_output
import copy
import numpy as np
from collections import OrderedDict
from x2paddle.core.program import PaddleLayer
from x2paddle.core.util import *
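# Fuse TensorFlow's decomposed PReLU, relu(x) plus a fill_constant-scaled
# alpha * (x - abs(x)) term (the factor is 0.5 in the standard TF
# decomposition), into fluid.layers.prelu in channel mode.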
class PReLUOpt:
def __init__(self):
pass
def run(self, graph):
print("Optimize: PReLUOpt...")
layers = copy.deepcopy(graph.layers)
for layer_id, layer in layers.items():
if layer.kernel != "fluid.layers.elementwise_add":
continue
axis = layer.attrs.get('axis', -1)
if axis != -1 and axis != 3:
continue
input_ids0 = graph.edges_in[layer_id]
relu_layer0 = graph.layers[input_ids0[0]]
mul_layer0 = graph.layers[input_ids0[1]]
if relu_layer0.kernel != "fluid.layers.relu":
continue
if mul_layer0.kernel != "fluid.layers.elementwise_mul":
continue
axis = mul_layer0.attrs.get('axis', -1)
if axis != -1 and axis != 3:
continue
if len(graph.edges_out.get(input_ids0[0], [])) != 1:
continue
if len(graph.edges_out.get(input_ids0[1], [])) != 1:
continue
input_ids1_0 = graph.edges_in[input_ids0[0]]
input_ids1_1 = graph.edges_in[input_ids0[1]]
fill_layer = graph.layers[input_ids1_1[1]]
mul_layer1 = graph.layers[input_ids1_1[0]]
if fill_layer.kernel != "fluid.layers.fill_constant":
continue
if mul_layer1.kernel != "fluid.layers.elementwise_mul":
continue
axis = mul_layer1.attrs.get('axis', -1)
if axis != -1 and axis != 0:
continue
if len(graph.edges_out.get(input_ids1_1[1], [])) != 1:
continue
if len(graph.edges_out.get(input_ids1_0[0], [])) != 3:
continue
input_ids2 = graph.edges_in[input_ids1_1[0]]
alpha = graph.layers[input_ids2[0]]
sub_layer = graph.layers[input_ids2[1]]
if alpha.kernel != "fluid.layers.create_parameter":
continue
if sub_layer.kernel != "fluid.layers.elementwise_sub":
continue
axis = sub_layer.attrs.get('axis', -1)
if axis != -1 and axis != 3:
continue
if len(graph.edges_out.get(input_ids2[0], [])) != 1:
continue
if len(graph.edges_out.get(input_ids2[1], [])) != 1:
continue
if alpha.outputs[0] not in graph.parameters:
continue
input_ids3 = graph.edges_in[input_ids2[1]]
add_layer = graph.layers[input_ids3[0]]
abs_layer = graph.layers[input_ids3[1]]
if abs_layer.kernel != "fluid.layers.abs":
continue
if len(graph.edges_out.get(input_ids3[1], [])) != 1:
continue
ids = set([
layer.id, relu_layer0.id, mul_layer0.id, fill_layer.id, mul_layer1.id, alpha.id,
sub_layer.id, abs_layer.id])
for matched_id in ids:
del graph.layers[matched_id]
if matched_id in graph.edges_in:
del graph.edges_in[matched_id]
if matched_id in graph.edges_out:
del graph.edges_out[matched_id]
copy_layers = copy.deepcopy(graph.layers)
graph.layers = OrderedDict()
for k, v in copy_layers.items():
if k != add_layer.id:
graph.layers[k] = v
continue
graph.layers[k] = v
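# prelu in channel mode expects NCHW input and a (1, C, 1, 1) alpha, hence
# the transpose pair here and the alpha reshape after the loop.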
transpose0 = PaddleLayer(
id='{}_1'.format(k),
kernel="fluid.layers.transpose",
inputs={"x": v.outputs[0]},
outputs=["transpose_for_prelu"],
perm=[0, 3, 1, 2])
prelu = PaddleLayer(
id='{}_2'.format(k),
kernel="fluid.layers.prelu",
inputs={"x": "transpose_for_prelu"},
outputs=layer.outputs,
mode=string("channel"),
param_attr="'{}'".format(alpha.outputs[0]))
transpose1 = PaddleLayer(
id=layer_id,
kernel="fluid.layers.transpose",
inputs={"x": layer.outputs[0]},
outputs=layer.outputs,
perm=[0, 2, 3, 1])
graph.layers[transpose0.id] = transpose0
graph.layers[prelu.id] = prelu
graph.layers[transpose1.id] = transpose1
first_axis = graph.parameters[alpha.outputs[0]].shape[0]
graph.parameters[alpha.outputs[0]] = np.reshape(graph.parameters[alpha.outputs[0]], (1, first_axis, 1, 1))
graph.build()
\ No newline at end of file