diff --git a/tutorials/source_en/use/saving_and_loading_model_parameters.md b/tutorials/source_en/use/saving_and_loading_model_parameters.md
index 7cad248fb8bfcd6ed9d48410108d19c2047bd91d..ba46b8a89a72720b6771bb073133b8abd525039e 100644
--- a/tutorials/source_en/use/saving_and_loading_model_parameters.md
+++ b/tutorials/source_en/use/saving_and_loading_model_parameters.md
@@ -9,6 +9,7 @@
 - [Loading Model Parameters](#loading-model-parameters)
     - [For Inference Validation](#for-inference-validation)
     - [For Retraining](#for-retraining)
+    - [Export GEIR Model and ONNX Model](#export-geir-model-and-onnx-model)
@@ -135,3 +136,23 @@ model.train(epoch, dataset)
 ```
 
 The `load_checkpoint` method returns a parameter dictionary, and the `load_param_into_net` method then loads the parameters from the dictionary into the network or optimizer.
+
+## Export GEIR Model and ONNX Model
+Once you have a CheckPoint file, if you want to perform inference, you need to generate the corresponding model from the network and the CheckPoint file.
+Currently, exporting GEIR models for the Ascend AI processor and exporting ONNX models for the GPU are supported. Taking the GEIR model as an example, the export is implemented as follows:
+```python
+import numpy as np
+from mindspore import Tensor
+from mindspore.train.serialization import export, load_checkpoint, load_param_into_net
+
+net = ResNet50()
+# load_checkpoint returns a parameter dictionary for the model
+param_dict = load_checkpoint("resnet50-2_32.ckpt", net=net)
+# load the parameters from the dictionary into the network
+load_param_into_net(net, param_dict)
+input_data = np.random.uniform(0.0, 1.0, size=[32, 3, 224, 224]).astype(np.float32)
+export(net, Tensor(input_data), file_name='resnet50-2_32.pb', file_format='GEIR')
+```
+Before using the `export` interface, you need to import it from `mindspore.train.serialization`.
+The `input_data` argument specifies the input shape and data type of the exported model.
+If you want to export an ONNX model, simply set the `file_format` parameter of the `export` interface to ONNX: `file_format = 'ONNX'`.
\ No newline at end of file
diff --git a/tutorials/source_zh_cn/use/saving_and_loading_model_parameters.md b/tutorials/source_zh_cn/use/saving_and_loading_model_parameters.md
index 1bd0c24b7c33f1b6a9300f00dccc4688d53b6033..50f901b30a83ae3f9cecbbb97d502b775d7ea757 100644
--- a/tutorials/source_zh_cn/use/saving_and_loading_model_parameters.md
+++ b/tutorials/source_zh_cn/use/saving_and_loading_model_parameters.md
@@ -9,6 +9,7 @@
 - [Loading Model Parameters](#模型参数加载)
     - [For Inference Validation](#用于推理验证)
     - [For Retraining](#用于再训练场景)
+    - [Export GEIR Model and ONNX Model](#导出geir模型和onnx模型)
@@ -135,3 +136,23 @@ model.train(epoch, dataset)
 ```
 
 The `load_checkpoint` method returns a parameter dictionary, and `load_param_into_net` loads the corresponding parameters from the dictionary into the network or optimizer.
+
+## Export GEIR Model and ONNX Model
+Once you have a CheckPoint file, if you want to continue with inference, you need to generate the corresponding model from the network and the CheckPoint file. Currently, exporting GEIR models for the Ascend AI processor and exporting general ONNX models for the GPU are supported.
+The following uses GEIR as an example to describe how model export is implemented; the code is as follows:
+```python
+import numpy as np
+from mindspore import Tensor
+from mindspore.train.serialization import export, load_checkpoint, load_param_into_net
+
+net = ResNet50()
+# load_checkpoint returns a parameter dictionary for the model
+param_dict = load_checkpoint("resnet50-2_32.ckpt", net=net)
+# load the parameters from the dictionary into the network
+load_param_into_net(net, param_dict)
+input_data = np.random.uniform(0.0, 1.0, size=[32, 3, 224, 224]).astype(np.float32)
+export(net, Tensor(input_data), file_name='resnet50-2_32.pb', file_format='GEIR')
+```
+Before using the `export` interface, you need to import `mindspore.train.serialization` first.
+The `input_data` argument specifies the input shape and data type of the exported model.
+If you want to export an ONNX model, simply set the `file_format` parameter of the `export` interface to ONNX: `file_format = 'ONNX'`.
\ No newline at end of file