Commit fe6e66bd authored by M mindspore-ci-bot, committed by Gitee

!75 Instructions for exporting GEIR model and ONNX model

Merge pull request !75 from 于振华/master
......@@ -9,6 +9,7 @@
- [Loading Model Parameters](#loading-model-parameters)
- [For Inference Validation](#for-inference-validation)
- [For Retraining](#for-retraining)
- [Export GEIR Model and ONNX Model](#export-geir-model-and-onnx-model)
<!-- /TOC -->
......@@ -135,3 +136,21 @@ model.train(epoch, dataset)
The `load_checkpoint` method returns a parameter dictionary and then the `load_param_into_net` method loads parameters in the parameter dictionary to the network or optimizer.
## Export GEIR Model and ONNX Model
Once you have a CheckPoint file, if you want to perform inference, you need to generate the corresponding model from the network and the CheckPoint.
Currently, exporting GEIR models for the Ascend AI processor and exporting ONNX models for the GPU are supported. The following takes a GEIR model as an example to illustrate model export;
the code is as follows:
```python
import numpy as np
from mindspore.train.serialization import export, load_checkpoint, load_param_into_net

# ResNet50 is assumed to be defined or imported beforehand (e.g. from the model zoo)
net = ResNet50()
# load_checkpoint returns a parameter dict for the model
param_dict = load_checkpoint("resnet50-2_32.ckpt", net=net)
# load the parameters into net
load_param_into_net(net, param_dict)
input_data = np.random.uniform(0.0, 1.0, size=[32, 3, 224, 224]).astype(np.float32)
export(net, input_data, file_name='resnet50-2_32.pb', file_format='GEIR')
```
Before using the `export` interface, you need to import `mindspore.train.serialization`.
The `input` parameter specifies the input shape and data type of the exported model.
To export an ONNX model, you only need to set the `file_format` parameter of the `export` interface to ONNX: `file_format = 'ONNX'`.
\ No newline at end of file
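The dummy input array fixes the exported model's input signature. A minimal NumPy-only sketch (batch size 32, a typical ResNet50 input, assumed here for illustration) showing which shape and dtype the array carries into `export`:

```python
import numpy as np

# The exported model's input signature is taken from this dummy array:
# batch size 32, 3 channels, 224x224 images, float32.
input_data = np.random.uniform(0.0, 1.0, size=[32, 3, 224, 224]).astype(np.float32)

# export(net, input_data, ...) records exactly this shape and dtype
print(input_data.shape)  # (32, 3, 224, 224)
print(input_data.dtype)  # float32
```

To trace with a different batch size or resolution, change the `size` argument before calling `export`.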
......@@ -9,6 +9,7 @@
- [模型参数加载](#模型参数加载)
- [用于推理验证](#用于推理验证)
- [用于再训练场景](#用于再训练场景)
- [导出GEIR模型和ONNX模型](#导出geir模型和onnx模型)
<!-- /TOC -->
......@@ -135,3 +136,21 @@ model.train(epoch, dataset)
```
`load_checkpoint`方法会返回一个参数字典,`load_param_into_net`会把参数字典中相应的参数加载到网络或优化器中。
## Export GEIR Model and ONNX Model
Once you have a CheckPoint file, if you want to perform inference, you need to generate the corresponding model from the network and the CheckPoint. Currently, exporting GEIR models for the Ascend AI processor and exporting general ONNX models for the GPU are supported.
The following takes GEIR as an example to illustrate model export; the code is as follows:
```python
import numpy as np
from mindspore.train.serialization import export, load_checkpoint, load_param_into_net

# ResNet50 is assumed to be defined or imported beforehand (e.g. from the model zoo)
net = ResNet50()
# load_checkpoint returns a parameter dict for the model
param_dict = load_checkpoint("resnet50-2_32.ckpt", net=net)
# load the parameters into net
load_param_into_net(net, param_dict)
input_data = np.random.uniform(0.0, 1.0, size=[32, 3, 224, 224]).astype(np.float32)
export(net, input_data, file_name='resnet50-2_32.pb', file_format='GEIR')
```
Before using the `export` interface, you need to import `mindspore.train.serialization`.
The `input` parameter specifies the input shape and data type of the exported model.
To export an ONNX model, you only need to set the `file_format` parameter of the `export` interface to ONNX: `file_format = 'ONNX'`.
\ No newline at end of file