Unverified commit f909d314, authored by WJJ1995, committed by GitHub

Integrate Paddle-Lite opt tool (#681)

* Add bitwise ops

* update op_list.md

* add aten::format

* add opt support

* Add api.md

* rm aten op

* deal with comments

* fixed readme

* fixed readme

* fixed readme
Parent 961e600d
@@ -52,6 +52,7 @@ X2Paddle is the model conversion tool of the PaddlePaddle ecosystem, dedicated to helping users of other deep learning…
- tensorflow == 1.14 (required for converting TensorFlow models)
- onnx >= 1.6.0 (required for converting ONNX models)
- torch >= 1.5.0 (required for converting PyTorch models)
- paddlelite == 2.9.0 (required for one-click conversion to the Paddle-Lite supported format)
### Installation via pip (recommended)
@@ -100,6 +101,12 @@ x2paddle --framework=caffe --prototxt=deploy.prototxt --weight=deploy.caffemodel
| --model | When framework is tensorflow/onnx, specifies the path of the TensorFlow pb model file or the ONNX model |
| --caffe_proto | **[optional]** Path of the caffe_pb2.py file compiled from caffe.proto; used when custom Layers exist. Defaults to None |
| --define_input_shape | **[optional]** For TensorFlow. When set, the user is required to enter the shape of every Placeholder; see [Q2 in the FAQ](./docs/inference_model_convertor/FAQ.md) |
| --to_lite | **[optional]** Pass this flag to additionally convert the result to the Paddle-Lite supported format with the opt tool; disabled by default |
| --lite_valid_places | **[optional]** Specifies the target backends; several can be given at once (comma-separated), and opt will automatically pick the best option. Defaults to arm |
| --lite_model_type | **[optional]** Specifies the exported model type; currently protobuf and naive_buffer are supported. Defaults to naive_buffer |
#### One-click conversion to the Paddle-Lite supported format
See [Exporting the Paddle-Lite supported format with X2Paddle](docs/inference_model_convertor/convert2lite_api.md), and the example below.
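For example, a minimal one-click command (the file name `model.onnx` is illustrative):
```shell
x2paddle --framework=onnx --model=model.onnx --save_dir=pd_model --to_lite
```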
### Feature 2: PyTorch model training migration
# X2Paddle Convert2Lite API
## Contents
* [x2paddle.convert.tf2paddle](#1)
* [x2paddle.convert.caffe2paddle](#2)
* [x2paddle.convert.onnx2paddle](#3)
* [x2paddle.convert.pytorch2paddle](#4)
## Exporting the Paddle-Lite supported format with X2Paddle
**Background**: Running a model from a third-party source (TensorFlow, Caffe, ONNX, PyTorch) on Paddle-Lite normally takes two conversions: first use X2Paddle to convert the third-party model to the PaddlePaddle format, then use opt to convert the PaddlePaddle model to the format Paddle-Lite supports.
**Usage**: To simplify this process, X2Paddle integrates the opt tool and provides one-click conversion through both an API and the command line. Taking ONNX as an example:
***Via the API***
```python
from x2paddle.convert import onnx2paddle
onnx2paddle(model_path, save_dir,
            convert_to_lite=True,
            lite_valid_places="arm",
            lite_model_type="naive_buffer")
# model_path (str): path of the ONNX model
# save_dir (str): directory where the converted model is saved
# convert_to_lite (bool): whether to run the opt tool; defaults to False
# lite_valid_places (str): target backends; defaults to "arm"
# lite_model_type (str): exported model type; defaults to "naive_buffer"
```
***Via the command line***
```shell
x2paddle --framework=onnx --model=onnx_model.onnx --save_dir=pd_model --to_lite --lite_valid_places=arm --lite_model_type=naive_buffer
```
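After the command finishes, the converted Paddle model is saved under `pd_model/inference_model`, and the optimized Paddle-Lite model is written next to it; with the default naive_buffer type, opt writes `pd_model/opt.nb`.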
For TensorFlow, Caffe, and PyTorch models, refer to the following APIs.
## <h2 id="1">x2paddle.convert.tf2paddle</h2>
```python
x2paddle.convert.tf2paddle(model_path, save_dir, define_input_shape=False, convert_to_lite=False, lite_valid_places="arm", lite_model_type="naive_buffer")
```
> Convert a TensorFlow model.
> **Parameters**
>
> > - **model_path** (str): path of the TensorFlow pb model
> > - **save_dir** (str): directory where the converted model is saved
> > - **define_input_shape** (bool): whether to specify the input shapes; defaults to False
> > - **convert_to_lite** (bool): whether to convert to the Paddle-Lite supported format with the opt tool; defaults to False
> > - **lite_valid_places** (str): target backends; several can be given at once (comma-separated), and opt will automatically pick the best option; defaults to arm
> > - **lite_model_type** (str): exported model type; currently protobuf and naive_buffer are supported; defaults to naive_buffer
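
A minimal usage sketch (the file name `frozen_model.pb` is illustrative):
```python
from x2paddle.convert import tf2paddle

# Convert a frozen TensorFlow graph and also export a Paddle-Lite model.
tf2paddle("frozen_model.pb", "pd_model",
          define_input_shape=False,
          convert_to_lite=True)
```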
## <h2 id="2">x2paddle.convert.caffe2paddle</h2>
```python
x2paddle.convert.caffe2paddle(proto_file, weight_file, save_dir, caffe_proto, convert_to_lite=False, lite_valid_places="arm", lite_model_type="naive_buffer")
```
> Convert a Caffe model.
> **Parameters**
>
> > - **proto_file** (str): prototxt file of the Caffe model
> > - **weight_file** (str): weight file of the Caffe model
> > - **save_dir** (str): directory where the converted model is saved
> > - **caffe_proto** (str): optional; path of the caffe_pb2.py file compiled from caffe.proto, used when custom Layers exist; defaults to None
> > - **convert_to_lite** (bool): whether to convert to the Paddle-Lite supported format with the opt tool; defaults to False
> > - **lite_valid_places** (str): target backends; several can be given at once (comma-separated), and opt will automatically pick the best option; defaults to arm
> > - **lite_model_type** (str): exported model type; currently protobuf and naive_buffer are supported; defaults to naive_buffer
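
A minimal usage sketch (the Caffe file names are illustrative):
```python
from x2paddle.convert import caffe2paddle

# Convert a Caffe model and also export a Paddle-Lite model; caffe_proto
# is None because no custom Layers are involved.
caffe2paddle("deploy.prototxt", "deploy.caffemodel", "pd_model",
             caffe_proto=None,
             convert_to_lite=True)
```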
## <h2 id="3">x2paddle.convert.onnx2paddle</h2>
```python
x2paddle.convert.onnx2paddle(model_path, save_dir, convert_to_lite=False, lite_valid_places="arm", lite_model_type="naive_buffer")
```
> Convert an ONNX model.
> **Parameters**
>
> > - **model_path** (str): path of the ONNX model
> > - **save_dir** (str): directory where the converted model is saved
> > - **convert_to_lite** (bool): whether to convert to the Paddle-Lite supported format with the opt tool; defaults to False
> > - **lite_valid_places** (str): target backends; several can be given at once (comma-separated), and opt will automatically pick the best option; defaults to arm
> > - **lite_model_type** (str): exported model type; currently protobuf and naive_buffer are supported; defaults to naive_buffer
## <h2 id="4">x2paddle.convert.pytorch2paddle</h2>
```python
x2paddle.convert.pytorch2paddle(module, save_dir, jit_type="trace", input_examples=None, convert_to_lite=False, lite_valid_places="arm", lite_model_type="naive_buffer")
```
> Convert a PyTorch model.
> **Parameters**
>
> > - **module** (torch.nn.Module): the PyTorch Module
> > - **save_dir** (str): directory where the converted model is saved
> > - **jit_type** (str): conversion method; trace and script are currently available; defaults to trace
> > - **input_examples** (list[torch.tensor]): example inputs of torch.nn.Module; the length of the list must match the number of module inputs; defaults to None
> > - **convert_to_lite** (bool): whether to convert to the Paddle-Lite supported format with the opt tool; defaults to False
> > - **lite_valid_places** (str): target backends; several can be given at once (comma-separated), and opt will automatically pick the best option; defaults to arm
> > - **lite_model_type** (str): exported model type; currently protobuf and naive_buffer are supported; defaults to naive_buffer
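
A usage sketch (the torchvision model and save directory are illustrative, not part of the API):
```python
import torch
import torchvision
from x2paddle.convert import pytorch2paddle

# Trace-convert a torchvision ResNet-18 and also export a Paddle-Lite model.
module = torchvision.models.resnet18(pretrained=True)
module.eval()
# One example tensor per module input, used for tracing.
input_examples = [torch.randn(1, 3, 224, 224)]
pytorch2paddle(module, "pd_model",
               jit_type="trace",
               input_examples=input_examples,
               convert_to_lite=True,
               lite_valid_places="arm",
               lite_model_type="naive_buffer")
```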
@@ -87,11 +87,44 @@ def arg_parser():
        type=_text_type,
        default=None,
        help="pretrain model file of pytorch model")
    parser.add_argument(
        "--to_lite",
        "-tl",
        action="store_true",
        default=False,
        help="convert the saved model to Paddle-Lite format")
    parser.add_argument(
        "--lite_valid_places",
        "-vp",
        type=_text_type,
        default="arm",
        help="specify the backends the Paddle-Lite model may run on")
    parser.add_argument(
        "--lite_model_type",
        "-mt",
        type=_text_type,
        default="naive_buffer",
        help="the type of the exported Paddle-Lite model")
    return parser


def convert2lite(save_dir,
                 lite_valid_places="arm",
                 lite_model_type="naive_buffer"):
    """Convert to Paddle-Lite format."""
    from paddlelite.lite import Opt
    opt = Opt()
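    # gen_model saves the Paddle model under <save_dir>/inference_model,
    # which is the directory opt reads from.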
    opt.set_model_dir(save_dir + "/inference_model")
    opt.set_valid_places(lite_valid_places)
    opt.set_model_type(lite_model_type)
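    # The optimized output lands in <save_dir>; with the default
    # "naive_buffer" type the file is <save_dir>/opt.nb.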
    opt.set_optimize_out(save_dir + "/opt")
    opt.run()


def tf2paddle(model_path,
              save_dir,
              define_input_shape=False,
              convert_to_lite=False,
              lite_valid_places="arm",
              lite_model_type="naive_buffer"):
    # check tensorflow installation and version
    try:
        import os
@@ -120,9 +153,17 @@ def tf2paddle(model_path, save_dir, define_input_shape=False):
    graph_opt = GraphOptimizer(source_frame="tf")
    graph_opt.optimize(mapper.paddle_graph)
    mapper.paddle_graph.gen_model(save_dir)
    if convert_to_lite:
        convert2lite(save_dir, lite_valid_places, lite_model_type)


def caffe2paddle(proto_file,
                 weight_file,
                 save_dir,
                 caffe_proto,
                 convert_to_lite=False,
                 lite_valid_places="arm",
                 lite_model_type="naive_buffer"):
    from x2paddle.decoder.caffe_decoder import CaffeDecoder
    from x2paddle.op_mapper.caffe2paddle.caffe_op_mapper import CaffeOpMapper
    import google.protobuf as gpb
@@ -133,7 +174,7 @@ def caffe2paddle(proto, weight, save_dir, caffe_proto):
        version_satisfy = True
    assert version_satisfy, '[ERROR] google.protobuf >= 3.6.0 is required'
    print("Now translating model from caffe to paddle.")
    model = CaffeDecoder(proto_file, weight_file, caffe_proto)
    mapper = CaffeOpMapper(model)
    mapper.paddle_graph.build()
    print("Model optimizing ...")
@@ -142,9 +183,15 @@ def caffe2paddle(proto, weight, save_dir, caffe_proto):
    graph_opt.optimize(mapper.paddle_graph)
    print("Model optimized.")
    mapper.paddle_graph.gen_model(save_dir)
    if convert_to_lite:
        convert2lite(save_dir, lite_valid_places, lite_model_type)


def onnx2paddle(model_path,
                save_dir,
                convert_to_lite=False,
                lite_valid_places="arm",
                lite_model_type="naive_buffer"):
    # check onnx installation and version
    try:
        import onnx
@@ -165,9 +212,17 @@ def onnx2paddle(model_path, save_dir):
    mapper = ONNXOpMapper(model)
    mapper.paddle_graph.build()
    mapper.paddle_graph.gen_model(save_dir)
    if convert_to_lite:
        convert2lite(save_dir, lite_valid_places, lite_model_type)


def pytorch2paddle(module,
                   save_dir,
                   jit_type="trace",
                   input_examples=None,
                   convert_to_lite=False,
                   lite_valid_places="arm",
                   lite_model_type="naive_buffer"):
    # check pytorch installation and version
    try:
        import torch
@@ -199,6 +254,8 @@ def pytorch2paddle(module, save_dir, jit_type="trace", input_examples=None):
    graph_opt.optimize(mapper.paddle_graph)
    print("Model optimized.")
    mapper.paddle_graph.gen_model(save_dir, jit_type=jit_type)
    if convert_to_lite:
        convert2lite(save_dir, lite_valid_places, lite_model_type)


def main():
@@ -250,15 +307,32 @@ def main():
        define_input_shape = False
        if args.define_input_shape:
            define_input_shape = True
        tf2paddle(
            args.model,
            args.save_dir,
            define_input_shape,
            convert_to_lite=args.to_lite,
            lite_valid_places=args.lite_valid_places,
            lite_model_type=args.lite_model_type)
elif args.framework == "caffe":
assert args.prototxt is not None and args.weight is not None, "--prototxt and --weight should be defined while translating caffe model"
        caffe2paddle(
            args.prototxt,
            args.weight,
            args.save_dir,
            args.caffe_proto,
            convert_to_lite=args.to_lite,
            lite_valid_places=args.lite_valid_places,
            lite_model_type=args.lite_model_type)
elif args.framework == "onnx":
assert args.model is not None, "--model should be defined while translating onnx model"
        onnx2paddle(
            args.model,
            args.save_dir,
            convert_to_lite=args.to_lite,
            lite_valid_places=args.lite_valid_places,
            lite_model_type=args.lite_model_type)
    elif args.framework == "paddle2onnx":
        print(
            "Paddle to ONNX tool has been migrated to the new github: https://github.com/PaddlePaddle/paddle2onnx"