Commit 472e6c63 authored by fary86

Change GEIR to AIR

Parent e52c8a95
......@@ -25,7 +25,7 @@
| FP16 | 16-bit floating point, which is a half-precision floating point arithmetic format, consuming less memory. |
| FP32 | 32-bit floating point, which is a single-precision floating point arithmetic format. |
| GE | Graph Engine, MindSpore computational graph execution engine, which is responsible for optimizing hardware (such as operator fusion and memory overcommitment) based on the front-end computational graph and starting tasks on the device side. |
| GEIR | Graph Engine Intermediate Representation. Similar to ONNX, it is an open file format for machine learning defined by Huawei, which is better suited to the Ascend AI processor.|
| AIR | Ascend Intermediate Representation. Similar to ONNX, it is an open file format for machine learning defined by Huawei, which is better suited to the Ascend AI processor.|
| GHLO | Graph High Level Optimization. GHLO includes optimization irrelevant to hardware (such as dead code elimination), auto parallel, and auto differentiation. |
| GLLO | Graph Low Level Optimization. GLLO includes hardware-related optimization and in-depth optimization related to the combination of hardware and software, such as operator fusion and buffer fusion. |
| Graph Mode | MindSpore static graph mode. In this mode, the neural network model is compiled into an entire graph and then delivered for execution, featuring high performance. |
......
......@@ -25,7 +25,7 @@
| FP16 | 16-bit floating point, a half-precision floating-point format that consumes less memory. |
| FP32 | 32-bit floating point, a single-precision floating-point format. |
| GE | Graph Engine, the MindSpore computational graph execution engine, which performs hardware-related optimization (such as operator fusion and memory reuse) based on the front-end computational graph and launches tasks on the device side. |
| GEIR | Graph Engine Intermediate Representation. Similar to ONNX, it is an open file format for machine learning defined by Huawei, which is better suited to the Ascend AI processor.|
| AIR | Ascend Intermediate Representation. Similar to ONNX, it is an open file format for machine learning defined by Huawei, which is better suited to the Ascend AI processor.|
| GHLO | Graph High Level Optimization. GHLO includes hardware-independent optimization (such as dead code elimination), auto parallel, and auto differentiation. |
| GLLO | Graph Low Level Optimization. GLLO includes hardware-related optimization and in-depth software-hardware co-optimization such as operator fusion and buffer fusion. |
| Graph Mode | MindSpore static graph mode. The neural network model is compiled into an entire graph and then delivered for execution, featuring high performance. |
......@@ -48,4 +48,4 @@
| Schema | Dataset structure definition file that defines the fields contained in a dataset and the type of each field. |
| Summary | An operator that monitors the values of tensors in the network. It is a "peripheral" operation in the graph and does not affect the data flow itself. |
| TBE | Tensor Boost Engine, an operator development tool extended on the basis of the TVM (Tensor Virtual Machine) framework. |
| TFRecord | Data format defined by TensorFlow. |
\ No newline at end of file
| TFRecord | Data format defined by TensorFlow. |
......@@ -112,7 +112,7 @@ To perform on-device model inference using MindSpore, perform the following step
```
2. Call the `export` API to export the model file (`.mindir`) for the device.
```python
export(net, input_data, file_name="./lenet.pb", file_format='BINARY')
export(net, input_data, file_name="./lenet.mindir", file_format='MINDIR')
```
Take the LeNet network as an example. The generated on-device model file is `lenet.mindir`. The complete sample code `lenet.py` is as follows:
```python
......@@ -166,7 +166,7 @@ To perform on-device model inference using MindSpore, perform the following step
if is_ckpt_exist:
    param_dict = load_checkpoint(ckpt_file_name=ckpt_file_path)
    load_param_into_net(net, param_dict)
    export(net, input_data, file_name="./lenet.pb", file_format='BINARY')
    export(net, input_data, file_name="./lenet.mindir", file_format='MINDIR')
    print("export model success.")
else:
    print("checkpoint file does not exist.")
......@@ -355,4 +355,4 @@ The complete sample code is as follows:
}
```
\ No newline at end of file
......@@ -179,7 +179,7 @@ The preceding describes the quantization aware training from scratch. A more com
### Inference
Inference with a quantized model is the same as common model inference: either use the checkpoint file directly, or convert the checkpoint file into a common model format (such as ONNX or GEIR) for inference.
Inference with a quantized model is the same as common model inference: either use the checkpoint file directly, or convert the checkpoint file into a common model format (such as ONNX or AIR) for inference.
For details, see <https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html>.
......
......@@ -7,7 +7,7 @@
- [Inference on the Ascend 910 AI processor](#inference-on-the-ascend-910-ai-processor)
- [Inference Using a Checkpoint File](#inference-using-a-checkpoint-file)
- [Inference on the Ascend 310 AI processor](#inference-on-the-ascend-310-ai-processor)
- [Inference Using an ONNX or GEIR File](#inference-using-an-onnx-or-geir-file)
- [Inference Using an ONNX or AIR File](#inference-using-an-onnx-or-air-file)
- [Inference on a GPU](#inference-on-a-gpu)
- [Inference Using a Checkpoint File](#inference-using-a-checkpoint-file-1)
- [Inference Using an ONNX File](#inference-using-an-onnx-file)
......@@ -26,14 +26,14 @@ Models trained by MindSpore support the inference on different hardware platform
The inference can be performed in either of the following methods based on different principles:
- Use a checkpoint file for inference. That is, use the inference API to load data and the checkpoint file for inference in the MindSpore training environment.
- Convert the checkpoint file into a common model format, such as ONNX or GEIR, for inference. The inference environment does not depend on MindSpore, so inference can be performed across hardware platforms as long as the platform supports ONNX or GEIR inference. For example, models trained on the Ascend 910 AI processor can be inferred on a GPU or CPU.
- Convert the checkpoint file into a common model format, such as ONNX or AIR, for inference. The inference environment does not depend on MindSpore, so inference can be performed across hardware platforms as long as the platform supports ONNX or AIR inference. For example, models trained on the Ascend 910 AI processor can be inferred on a GPU or CPU.
MindSpore supports the following inference scenarios based on the hardware platform:
| Hardware Platform | Model File Format | Description |
| ----------------------- | ----------------- | ---------------------------------------- |
| Ascend 910 AI processor | Checkpoint | The training environment dependency is the same as that of MindSpore. |
| Ascend 310 AI processor | ONNX or GEIR | Equipped with the ACL framework and supports the model in OM format. You need to use a tool to convert a model into the OM format. |
| Ascend 310 AI processor | ONNX or AIR | Equipped with the ACL framework and supports the model in OM format. You need to use a tool to convert a model into the OM format. |
| GPU | Checkpoint | The training environment dependency is the same as that of MindSpore. |
| GPU | ONNX | Supports ONNX Runtime or SDK, for example, TensorRT. |
| CPU | Checkpoint | The training environment dependency is the same as that of MindSpore. |
......@@ -41,7 +41,7 @@ MindSpore supports the following inference scenarios based on the hardware platf
> Open Neural Network Exchange (ONNX) is an open file format designed for machine learning. It is used to store trained models. It enables different AI frameworks (such as PyTorch and MXNet) to store model data in the same format and interact with each other. For details, visit the ONNX official website <https://onnx.ai/>.
> Graph Engine Intermediate Representation (GEIR) is an open file format defined by Huawei for machine learning and can better adapt to the Ascend AI processor. It is similar to ONNX.
> Ascend Intermediate Representation (AIR) is an open file format defined by Huawei for machine learning and can better adapt to the Ascend AI processor. It is similar to ONNX.
> Ascend Computer Language (ACL) provides C++ API libraries for users to develop deep neural network applications, including device management, context management, stream management, memory management, model loading and execution, operator loading and execution, and media data processing. It matches the Ascend AI processor and enables hardware running management and resource management.
......@@ -110,13 +110,13 @@ MindSpore supports the following inference scenarios based on the hardware platf
## Inference on the Ascend 310 AI processor
### Inference Using an ONNX or GEIR File
### Inference Using an ONNX or AIR File
The Ascend 310 AI processor is equipped with the ACL framework and supports the OM format, which must be converted from a model in ONNX or GEIR format. To perform inference on the Ascend 310 AI processor, perform the following steps:
The Ascend 310 AI processor is equipped with the ACL framework and supports the OM format, which must be converted from a model in ONNX or AIR format. To perform inference on the Ascend 310 AI processor, perform the following steps:
1. Generate a model in ONNX or GEIR format on the training platform. For details, see [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
1. Generate a model in ONNX or AIR format on the training platform. For details, see [Export AIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#aironnx).
2. Convert the ONNX or GEIR model file into an OM model file and perform inference.
2. Convert the ONNX or AIR model file into an OM model file and perform inference.
- To perform inference in the cloud environment (ModelArts), see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html).
- For details about the local bare-metal environment where the Ascend 310 AI processor is deployed locally (compared with the cloud environment), see the document of the Ascend 310 AI processor software package.
......@@ -128,7 +128,7 @@ The inference is the same as that on the Ascend 910 AI processor.
### Inference Using an ONNX File
1. Generate a model in ONNX format on the training platform. For details, see [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
1. Generate a model in ONNX format on the training platform. For details, see [Export AIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#aironnx).
2. Perform inference on a GPU by referring to the runtime or SDK document. For example, use TensorRT to perform inference on the NVIDIA GPU. For details, see [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
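As an illustration, here is a minimal sketch of running an exported ONNX model through the TensorRT backend for ONNX; the model file name and input shape are illustrative, and the `onnx_tensorrt` package must be installed separately:
```python
import numpy as np
import onnx
import onnx_tensorrt.backend as backend

# Load the ONNX model exported from the training platform (path is illustrative).
model = onnx.load("resnet50-2_32.onnx")
# Build a TensorRT engine targeting the first CUDA device.
engine = backend.prepare(model, device='CUDA:0')
# Dummy input matching the shape used at export time.
input_data = np.random.uniform(0.0, 1.0, size=(32, 3, 224, 224)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data.shape)
```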
......@@ -140,7 +140,7 @@ The inference is the same as that on the Ascend 910 AI processor.
### Inference Using an ONNX File
Similar to the inference on a GPU, the following steps are required:
1. Generate a model in ONNX format on the training platform. For details, see [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
1. Generate a model in ONNX format on the training platform. For details, see [Export AIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#aironnx).
2. Perform inference on a CPU by referring to the runtime or SDK document. For details about how to use the ONNX Runtime, see the [ONNX Runtime document](https://github.com/microsoft/onnxruntime).
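For reference, a minimal sketch of CPU inference with ONNX Runtime; the file name and input shape are illustrative, assuming the `onnxruntime` package is installed:
```python
import numpy as np
import onnxruntime as ort

# Create an inference session from the exported ONNX file (path is illustrative).
session = ort.InferenceSession("resnet50-2_32.onnx")
input_name = session.get_inputs()[0].name
# Dummy input matching the shape used at export time.
data = np.random.uniform(0.0, 1.0, size=(32, 3, 224, 224)).astype(np.float32)
outputs = session.run(None, {input_name: data})
print(outputs[0].shape)
```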
......
......@@ -9,7 +9,7 @@
- [Loading Model Parameters](#loading-model-parameters)
- [For Inference Validation](#for-inference-validation)
- [For Retraining](#for-retraining)
- [Export GEIR Model and ONNX Model](#export-geir-model-and-onnx-model)
- [Export AIR Model and ONNX Model](#export-air-model-and-onnx-model)
<!-- /TOC -->
......@@ -140,9 +140,9 @@ model.train(epoch, dataset)
The `load_checkpoint` method returns a parameter dictionary and then the `load_param_into_net` method loads parameters in the parameter dictionary to the network or optimizer.
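For example, a minimal sketch of loading a checkpoint for inference validation; it assumes `resnet` is an already constructed network and the checkpoint file name is illustrative:
```python
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import load_checkpoint, load_param_into_net

# load_checkpoint returns a parameter dictionary read from the checkpoint file.
param_dict = load_checkpoint("resnet50-2_32.ckpt")
# load_param_into_net copies the parameters into the network in place.
load_param_into_net(resnet, param_dict)
# Run a forward pass with dummy data to validate the loaded model.
output = resnet(Tensor(np.random.uniform(0.0, 1.0, (32, 3, 224, 224)).astype(np.float32)))
```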
## Export GEIR Model and ONNX Model
## Export AIR Model and ONNX Model
When you have a checkpoint file and want to perform inference, you need to generate the corresponding model based on the network and the checkpoint.
Currently we support exporting GEIR models for the Ascend AI processor and ONNX models for GPUs. Taking the export of a GEIR model as an example to illustrate model export,
Currently we support exporting AIR models for the Ascend AI processor and ONNX models. Taking the export of an AIR model as an example to illustrate model export,
the code is as follows:
```python
from mindspore.train.serialization import export
......@@ -153,8 +153,26 @@ param_dict = load_checkpoint("resnet50-2_32.ckpt")
# load the parameter into net
load_param_into_net(resnet, param_dict)
input = np.random.uniform(0.0, 1.0, size = [32, 3, 224, 224]).astype(np.float32)
export(resnet, Tensor(input), file_name = 'resnet50-2_32.pb', file_format = 'GEIR')
export(resnet, Tensor(input), file_name = 'resnet50-2_32.pb', file_format = 'AIR')
```
Before using the `export` interface, you need to import `mindspore.train.serialization`.
The `input` parameter is used to specify the input shape and data type of the exported model.
If you want to export an ONNX model, you only need to set the `file_format` parameter in the `export` interface to ONNX: `file_format = 'ONNX'`.
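For instance, a minimal sketch reusing the ResNet-50 example above to export ONNX instead of AIR (the file name is illustrative):
```python
export(resnet, Tensor(input), file_name='resnet50-2_32.onnx', file_format='ONNX')
```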
## Export MINDIR Model
If you want to perform on-device inference, you need to generate the corresponding MINDIR model based on the network and the checkpoint.
Currently we support exporting MINDIR models for graph-mode inference networks that do not contain control flow. Taking the export of a MINDIR model as an example to illustrate model export,
the code is as follows:
```python
from mindspore.train.serialization import export
import numpy as np
resnet = ResNet50()
# return a parameter dict for model
param_dict = load_checkpoint("resnet50-2_32.ckpt")
# load the parameter into net
load_param_into_net(resnet, param_dict)
input = np.random.uniform(0.0, 1.0, size = [32, 3, 224, 224]).astype(np.float32)
export(resnet, Tensor(input), file_name = 'resnet50-2_32.mindir', file_format = 'MINDIR')
```
Before using the `export` interface, you need to import `mindspore.train.serialization`.
The `input` parameter is used to specify the input shape and data type of the exported model.
......@@ -112,7 +112,7 @@ The steps for performing on-device model inference with MindSpore are as follows.
```
2. Call the `export` API to export the model file (`.mindir`).
```python
export(net, input_data, file_name="./lenet.pb", file_format='BINARY')
export(net, input_data, file_name="./lenet.mindir", file_format='MINDIR')
```
Take the LeNet network as an example. The generated on-device model file is `lenet.mindir`. The complete sample code `lenet.py` is as follows.
......@@ -167,7 +167,7 @@ The steps for performing on-device model inference with MindSpore are as follows.
if is_ckpt_exist:
    param_dict = load_checkpoint(ckpt_file_name=ckpt_file_path)
    load_param_into_net(net, param_dict)
    export(net, input_data, file_name="./lenet.pb", file_format='BINARY')
    export(net, input_data, file_name="./lenet.mindir", file_format='MINDIR')
    print("export model success.")
else:
    print("checkpoint file does not exist.")
......
......@@ -179,7 +179,7 @@ net = qat.convert_quant_network(net, quant_delay=0, bn_fold=False, freeze_bn=100
### Inference
Inference with a quantized model is the same as common model inference: either use the checkpoint file directly, or convert the checkpoint file into a common model format (such as ONNX or GEIR) for inference.
Inference with a quantized model is the same as common model inference: either use the checkpoint file directly, or convert the checkpoint file into a common model format (such as ONNX or AIR) for inference.
> For details about inference, see <https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html>.
......
......@@ -7,7 +7,7 @@
- [Inference on the Ascend 910 AI processor](#ascend-910-ai处理器上推理)
- [Inference Using a Checkpoint File](#使用checkpoint格式文件推理)
- [Inference on the Ascend 310 AI processor](#ascend-310-ai处理器上推理)
- [Inference Using an ONNX or GEIR File](#使用onnx与geir格式文件推理)
- [Inference Using an ONNX or AIR File](#使用onnx与air格式文件推理)
- [Inference on a GPU](#gpu上推理)
- [Inference Using a Checkpoint File](#使用checkpoint格式文件推理-1)
- [Inference Using an ONNX File](#使用onnx格式文件推理)
......@@ -26,14 +26,14 @@
The inference can be performed in either of the following ways based on different principles:
- Use the checkpoint file directly for inference, that is, use the inference API to load data and the checkpoint file for inference in the MindSpore training environment.
- Convert the checkpoint file into a common model format, such as ONNX or GEIR, for inference. The inference environment does not depend on MindSpore, so inference can be performed across hardware platforms as long as the platform supports ONNX or GEIR inference. For example, models trained on the Ascend 910 AI processor can be inferred on a GPU or CPU.
- Convert the checkpoint file into a common model format, such as ONNX or AIR, for inference. The inference environment does not depend on MindSpore, so inference can be performed across hardware platforms as long as the platform supports ONNX or AIR inference. For example, models trained on the Ascend 910 AI processor can be inferred on a GPU or CPU.
MindSpore supports the following inference scenarios based on the hardware platform:
Hardware Platform | Model File Format | Description
--|--|--
Ascend 910 AI processor | Checkpoint | The training environment dependency is the same as that of MindSpore.
Ascend 310 AI processor | ONNX or GEIR | Equipped with the ACL framework and supports models in OM format. You need to use a tool to convert a model into the OM format.
Ascend 310 AI processor | ONNX or AIR | Equipped with the ACL framework and supports models in OM format. You need to use a tool to convert a model into the OM format.
GPU | Checkpoint | The training environment dependency is the same as that of MindSpore.
GPU | ONNX | Runtime/SDK that supports ONNX inference, such as TensorRT.
CPU | Checkpoint | The training environment dependency is the same as that of MindSpore.
......@@ -41,7 +41,7 @@ CPU | ONNX | Runtime/SDK that supports ONNX inference, such as TensorRT.
> Open Neural Network Exchange (ONNX) is an open file format designed for machine learning and used to store trained models. It enables different AI frameworks (such as PyTorch and MXNet) to store model data in the same format and interact with each other. For details, visit the ONNX official website at <https://onnx.ai/>.
> Graph Engine Intermediate Representation (GEIR), similar to ONNX, is an open file format defined by Huawei for machine learning and can better adapt to the Ascend AI processor.
> Ascend Intermediate Representation (AIR), similar to ONNX, is an open file format defined by Huawei for machine learning and can better adapt to the Ascend AI processor.
> Ascend Computer Language (ACL) provides C++ API libraries for device management, context management, stream management, memory management, model loading and execution, operator loading and execution, and media data processing, which users can use to develop deep neural network applications. It matches the Ascend AI processor and enables hardware running management and resource management.
......@@ -107,13 +107,13 @@ CPU | ONNX | Runtime/SDK that supports ONNX inference, such as TensorRT.
## Inference on the Ascend 310 AI processor
### Inference Using an ONNX or GEIR File
### Inference Using an ONNX or AIR File
The Ascend 310 AI processor is equipped with the ACL framework and supports the OM format, which must be converted from an ONNX or GEIR model. Therefore, inference on the Ascend 310 AI processor requires the following two steps:
The Ascend 310 AI processor is equipped with the ACL framework and supports the OM format, which must be converted from an ONNX or AIR model. Therefore, inference on the Ascend 310 AI processor requires the following two steps:
1. Generate a model in ONNX or GEIR format on the training platform. For details, see [Model Export - Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
1. Generate a model in ONNX or AIR format on the training platform. For details, see [Model Export - Export AIR Model and ONNX Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#aironnx).
2. Convert the ONNX/GEIR model file into an OM model file and perform inference.
2. Convert the ONNX/AIR model file into an OM model file and perform inference.
- In the cloud (ModelArts environment), see the [Ascend 910 training and Ascend 310 inference sample](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html) to complete the inference.
- For a local bare-metal environment (compared with the cloud environment, that is, an Ascend 310 AI processor is deployed locally), see the documentation of the Ascend 310 AI processor software package.
......@@ -125,7 +125,7 @@ The Ascend 310 AI processor is equipped with the ACL framework and supports the OM format, which needs
### Inference Using an ONNX File
1. Generate a model in ONNX format on the training platform. For details, see [Model Export - Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
1. Generate a model in ONNX format on the training platform. For details, see [Model Export - Export AIR Model and ONNX Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#aironnx).
2. Perform inference on a GPU by referring to the runtime/SDK documentation. For example, to perform inference on an NVIDIA GPU with the commonly used TensorRT, see [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
......@@ -137,7 +137,7 @@ The Ascend 310 AI processor is equipped with the ACL framework and supports the OM format, which needs
### Inference Using an ONNX File
Similar to the inference on a GPU, the following steps are required:
1. Generate a model in ONNX format on the training platform. For details, see [Model Export - Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
1. Generate a model in ONNX format on the training platform. For details, see [Model Export - Export AIR Model and ONNX Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#aironnx).
2. Perform inference on a CPU by referring to the runtime/SDK documentation. For example, to use ONNX Runtime, see the [ONNX Runtime documentation](https://github.com/microsoft/onnxruntime).
......
......@@ -9,7 +9,7 @@
- [Loading Model Parameters](#模型参数加载)
- [For Inference Validation](#用于推理验证)
- [For Retraining](#用于再训练场景)
- [Export GEIR Model and ONNX Model](#导出geir模型和onnx模型)
- [Export AIR Model and ONNX Model](#导出air模型和onnx模型)
<!-- /TOC -->
......@@ -141,9 +141,9 @@ model.train(epoch, dataset)
The `load_checkpoint` method returns a parameter dictionary, and `load_param_into_net` loads the corresponding parameters from the dictionary into the network or optimizer.
## Export GEIR Model and ONNX Model
When you have a checkpoint file and want to perform inference, you need to generate the corresponding model based on the network and the checkpoint. Currently, we support exporting GEIR models for the Ascend AI processor and exporting general ONNX models for GPUs.
The following uses GEIR as an example to describe model export. The code is as follows:
## Export AIR Model and ONNX Model
When you have a checkpoint file and want to perform inference, you need to generate the corresponding model based on the network and the checkpoint. Currently, we support exporting AIR models for the Ascend AI processor and exporting general ONNX models.
The following uses AIR as an example to describe model export. The code is as follows:
```python
from mindspore.train.serialization import export
import numpy as np
......@@ -153,8 +153,25 @@ param_dict = load_checkpoint("resnet50-2_32.ckpt")
# load the parameter into net
load_param_into_net(resnet, param_dict)
input = np.random.uniform(0.0, 1.0, size = [32, 3, 224, 224]).astype(np.float32)
export(resnet, Tensor(input), file_name = 'resnet50-2_32.pb', file_format = 'GEIR')
export(resnet, Tensor(input), file_name = 'resnet50-2_32.pb', file_format = 'AIR')
```
Before using the `export` interface, you need to import `mindspore.train.serialization`.
The `input` parameter specifies the input shape and data type of the exported model.
To export an ONNX model, you only need to set the `file_format` parameter of the `export` interface to ONNX: `file_format = 'ONNX'`.
## Export MINDIR Model
To use a trained model for on-device inference, you need to generate the corresponding MINDIR model based on the network and the checkpoint. Currently, we support exporting graph-mode inference networks that do not contain control flow semantics.
The following uses MINDIR as an example to describe model export. The code is as follows:
```python
from mindspore.train.serialization import export
import numpy as np
resnet = ResNet50()
# return a parameter dict for model
param_dict = load_checkpoint("resnet50-2_32.ckpt")
# load the parameter into net
load_param_into_net(resnet, param_dict)
input = np.random.uniform(0.0, 1.0, size = [32, 3, 224, 224]).astype(np.float32)
export(resnet, Tensor(input), file_name = 'resnet50-2_32.mindir', file_format = 'MINDIR')
```
Before using the `export` interface, you need to import `mindspore.train.serialization`.
The `input` parameter specifies the input shape and data type of the exported model.