Commit 4e2e2c70 authored by zengyawen

update docs

Signed-off-by: zengyawen <zengyawen1@huawei.com>
Parent: b08387ce
@@ -11,6 +11,7 @@
- [Image](image.md)
- [Rawfile](rawfile.md)
- [MindSpore](_mind_spore.md)
- [NeuralNetworkRuntime](_neural_nework_runtime.md)
- [AudioDecoder](_audio_decoder.md)
- [AudioEncoder](_audio_encoder.md)
- [CodecBase](_codec_base.md)
@@ -48,6 +49,8 @@
- [status.h](status_8h.md)
- [tensor.h](tensor_8h.md)
- [types.h](types_8h.md)
- [neural_network_runtime_type.h](neural__network__runtime__type_8h.md)
- [neural_network_runtime.h](neural__network__runtime_8h.md)
- [native_avcodec_audiodecoder.h](native__avcodec__audiodecoder_8h.md)
- [native_avcodec_audioencoder.h](native__avcodec__audioencoder_8h.md)
- [native_avcodec_base.h](native__avcodec__base_8h.md)
@@ -76,6 +79,10 @@
- [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md)
- [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md)
- [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md)
- [OH_NN_Memory](_o_h___n_n___memory.md)
- [OH_NN_QuantParam](_o_h___n_n___quant_param.md)
- [OH_NN_Tensor](_o_h___n_n___tensor.md)
- [OH_NN_UInt32Array](_o_h___n_n___u_int32_array.md)
- [OH_AVCodecAsyncCallback](_o_h___a_v_codec_async_callback.md)
- [OH_AVCodecBufferAttr](_o_h___a_v_codec_buffer_attr.md)
- [OH_Huks_Blob](_o_h___huks___blob.md)
# OH_NN_Memory
## Overview
Memory structure.
**Since:**
9
**Related Modules:**
[NeuralNetworkRuntime](_neural_nework_runtime.md)
## Summary
### Member Variables
| Name | Description |
| -------- | -------- |
| [data](#data) | Pointer to the shared memory. This memory is usually allocated by the underlying hardware driver. |
| [length](#length) | Length of the shared memory, in bytes. |
## Member Variable Description
### data
```
void* const OH_NN_Memory::data
```
**Description:**
Pointer to the shared memory. This memory is usually allocated by the underlying hardware driver.
### length
```
const size_t OH_NN_Memory::length
```
**Description:**
Length of the shared memory, in bytes.
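Both members are const-qualified, so an OH_NN_Memory instance is obtained from the runtime rather than filled in by the caller. The sketch below shows one plausible way to feed input data through such an instance; it assumes a valid executor, input index 0, an include path of `neural_network_runtime/neural_network_runtime.h` (adjust to your SDK layout), and the return types documented on the individual API pages.
```
// Sketch: feeding input data through OH_NN_Memory shared memory (illustrative only).
#include <string.h>
#include "neural_network_runtime/neural_network_runtime.h"

static OH_NN_ReturnCode FeedInput(OH_NNExecutor *executor,
                                  const OH_NN_Tensor *inputTensor,
                                  const float *inputData, size_t inputSize)
{
    // The runtime allocates the shared memory; data and length are set for us.
    OH_NN_Memory *memory = OH_NNExecutor_AllocateInputMemory(executor, 0, inputSize);
    if (memory == NULL) {
        return OH_NN_MEMORY_ERROR;
    }

    // Copy the input into the driver-allocated buffer, never past memory->length.
    memcpy(memory->data, inputData, memory->length);

    // Bind the shared memory to input 0 of the executor.
    OH_NN_ReturnCode ret = OH_NNExecutor_SetInputWithMemory(executor, 0, inputTensor, memory);

    // After inference, the memory is released through the executor as well:
    // OH_NNExecutor_DestroyInputMemory(executor, 0, &memory);
    return ret;
}
```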
# OH_NN_QuantParam
## Overview
Quantization information.
In quantization scenarios, 32-bit floating-point data is quantized into fixed-point data according to the following formula:
![zh-cn_formulaimage_0000001405137102](figures/zh-cn_formulaimage_0000001405137102.png)
where s and z are the quantization parameters, stored in scale and zeroPoint of OH_NN_QuantParam; r is the floating-point number and q is the quantization result. q_min is the lower bound and q_max is the upper bound of the quantization result, calculated as follows:
![zh-cn_formulaimage_0000001459019845](figures/zh-cn_formulaimage_0000001459019845.png)
![zh-cn_formulaimage_0000001408820090](figures/zh-cn_formulaimage_0000001408820090.png)
The clamp function is defined as follows:
![zh-cn_formulaimage_0000001455538697](figures/zh-cn_formulaimage_0000001455538697.png)
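For readers viewing this page without the formula images, the block below writes the relations out in LaTeX. It is a hedged reconstruction from the surrounding definitions (r, s, z, q, q_min, q_max, numBits) using the common affine-quantization convention with a signed numBits-bit range, not a transcription of the images themselves.
```
q = \mathrm{clamp}\left(\mathrm{round}\left(\tfrac{r}{s}\right) + z,\ q_{\min},\ q_{\max}\right)

q_{\min} = -2^{\,numBits - 1}, \qquad q_{\max} = 2^{\,numBits - 1} - 1

\mathrm{clamp}(x) =
\begin{cases}
q_{\min}, & x < q_{\min} \\
x,        & q_{\min} \le x \le q_{\max} \\
q_{\max}, & x > q_{\max}
\end{cases}
```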
**Since:**
9
**Related Modules:**
[NeuralNetworkRuntime](_neural_nework_runtime.md)
## Summary
### Member Variables
| Name | Description |
| -------- | -------- |
| [quantCount](#quantcount) | Length of the numBits, scale, and zeroPoint arrays.<br/>In per-layer quantization, quantCount is usually set to 1, meaning that all channels of a tensor share one set of quantization parameters. In per-channel quantization, quantCount is usually equal to the number of tensor channels, and each channel uses its own quantization parameters. |
| [numBits](#numbits) | Number of quantization bits. |
| [scale](#scale) | Pointer to the quantization parameter s in the quantization formula. |
| [zeroPoint](#zeropoint) | Pointer to the quantization parameter z in the quantization formula. |
## Member Variable Description
### numBits
```
const uint32_t* OH_NN_QuantParam::numBits
```
**Description:**
Number of quantization bits.
### quantCount
```
uint32_t OH_NN_QuantParam::quantCount
```
**Description:**
Length of the numBits, scale, and zeroPoint arrays.
In per-layer quantization, quantCount is usually set to 1, meaning that all channels of a tensor share one set of quantization parameters. In per-channel quantization, quantCount is usually equal to the number of tensor channels, and each channel uses its own quantization parameters.
### scale
```
const double* OH_NN_QuantParam::scale
```
**Description:**
Pointer to the scale data in the quantization formula.
### zeroPoint
```
const int32_t* OH_NN_QuantParam::zeroPoint
```
**Description:**
Pointer to the zero point data in the quantization formula.
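To make the per-layer case concrete, the sketch below fills an OH_NN_QuantParam with a single parameter set shared by every channel (quantCount = 1). The numeric values and the include path are illustrative assumptions, not values from this reference.
```
// Sketch: per-layer quantization parameters (one set shared by all channels).
#include <stdint.h>
#include "neural_network_runtime/neural_network_runtime_type.h"

static const uint32_t g_numBits[1]   = {8};     // 8-bit quantization (illustrative)
static const double   g_scale[1]     = {0.25};  // quantization parameter s (illustrative)
static const int32_t  g_zeroPoint[1] = {0};     // quantization parameter z (illustrative)

static const OH_NN_QuantParam g_perLayerQuant = {
    .quantCount = 1,            // numBits/scale/zeroPoint each hold one element
    .numBits    = g_numBits,
    .scale      = g_scale,
    .zeroPoint  = g_zeroPoint,
};
```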
# OH_NN_Tensor
## Overview
Tensor structure.
It is usually used to construct data nodes and operator parameters in a model graph. When constructing a tensor, you need to specify its data type, number of dimensions, dimension information (shape), and quantization information.
**Since:**
9
**Related Modules:**
[NeuralNetworkRuntime](_neural_nework_runtime.md)
## Summary
### Member Variables
| Name | Description |
| -------- | -------- |
| [dataType](#datatype) | Data type of the tensor. The value must be taken from the [OH_NN_DataType](_neural_nework_runtime.md#oh_nn_datatype) enum. |
| [dimensionCount](#dimensioncount) | Number of dimensions of the tensor. |
| [dimensions](#dimensions) | Dimension information (shape) of the tensor. |
| [quantParam](#quantparam) | Quantization information of the tensor. The data type must be [OH_NN_QuantParam](_o_h___n_n___quant_param.md). |
| [type](#type) | Type of the tensor, which depends on how the tensor is used.<br/>When the tensor is used as a model input or output, set type to OH_NN_TENSOR.<br/>When the tensor is used as an operator parameter, select a suitable enum value other than OH_NN_TENSOR from [OH_NN_TensorType](_neural_nework_runtime.md#oh_nn_tensortype). |
## Member Variable Description
### dataType
```
OH_NN_DataType OH_NN_Tensor::dataType
```
**Description:**
Data type of the tensor. The value must be taken from the [OH_NN_DataType](_neural_nework_runtime.md#oh_nn_datatype) enum.
### dimensionCount
```
uint32_t OH_NN_Tensor::dimensionCount
```
**Description:**
Number of dimensions of the tensor.
### dimensions
```
const int32_t* OH_NN_Tensor::dimensions
```
**Description:**
Dimension information (shape) of the tensor.
### quantParam
```
const OH_NN_QuantParam* OH_NN_Tensor::quantParam
```
**Description:**
Quantization information of the tensor. The data type must be [OH_NN_QuantParam](_o_h___n_n___quant_param.md).
### type
```
OH_NN_TensorType OH_NN_Tensor::type
```
**Description:**
Type of the tensor, which depends on how the tensor is used.
When the tensor is used as a model input or output, set type to OH_NN_TENSOR.
When the tensor is used as an operator parameter, select a suitable enum value other than OH_NN_TENSOR from [OH_NN_TensorType](_neural_nework_runtime.md#oh_nn_tensortype).
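Putting the fields together, a tensor describing a non-quantized float32 model input could be declared as below. The shape, the NULL quantization parameter, and the include path are illustrative assumptions.
```
// Sketch: an OH_NN_Tensor describing a float32 input of shape [1, 2, 2, 3].
#include <stddef.h>
#include <stdint.h>
#include "neural_network_runtime/neural_network_runtime_type.h"

static const int32_t g_inputShape[] = {1, 2, 2, 3};

static const OH_NN_Tensor g_inputTensor = {
    .dataType       = OH_NN_FLOAT32,   // element type
    .dimensionCount = 4,               // number of dimensions
    .dimensions     = g_inputShape,    // shape of the tensor
    .quantParam     = NULL,            // no quantization for this input
    .type           = OH_NN_TENSOR,    // used as a model input, not an operator parameter
};
```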
# OH_NN_UInt32Array
## Overview
This structure stores a 32-bit unsigned integer array.
**Since:**
9
**Related Modules:**
[NeuralNetworkRuntime](_neural_nework_runtime.md)
## Summary
### Member Variables
| Name | Description |
| -------- | -------- |
| [data](#data) | Pointer to the unsigned integer array. |
| [size](#size) | Length of the array. |
## Member Variable Description
### data
```
uint32_t* OH_NN_UInt32Array::data
```
**Description:**
Pointer to the unsigned integer array.
### size
```
uint32_t OH_NN_UInt32Array::size
```
**Description:**
Length of the array.
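This array type is typically used to pass tensor index lists to APIs such as OH_NNModel_SpecifyInputsAndOutputs. A minimal sketch, with illustrative indices and an assumed include path:
```
// Sketch: wrapping index lists in OH_NN_UInt32Array to declare model inputs/outputs.
#include <stdint.h>
#include "neural_network_runtime/neural_network_runtime.h"

static OH_NN_ReturnCode DeclareGraphIO(OH_NNModel *model)
{
    uint32_t inputIndices[]  = {0, 1};  // tensor indices used as graph inputs (illustrative)
    uint32_t outputIndices[] = {3};     // tensor index used as the graph output (illustrative)

    OH_NN_UInt32Array inputs  = {.data = inputIndices,  .size = 2};
    OH_NN_UInt32Array outputs = {.data = outputIndices, .size = 1};

    return OH_NNModel_SpecifyInputsAndOutputs(model, &inputs, &outputs);
}
```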
# neural_network_runtime.h
## Overview
Defines the Neural Network Runtime APIs. AI inference frameworks use the native APIs provided by Neural Network Runtime to construct and compile models and run inference on accelerator hardware.
**Since:**
9
**Related Modules:**
[NeuralNetworkRuntime](_neural_nework_runtime.md)
## Summary
### Functions
| Name | Description |
| -------- | -------- |
| [OH_NNModel_Construct](_neural_nework_runtime.md#oh_nnmodel_construct) (void) | Creates a model instance of the [OH_NNModel](_neural_nework_runtime.md#oh_nnmodel) type, which is then built up with the other APIs provided by the OH_NNModel module. |
| [OH_NNModel_AddTensor](_neural_nework_runtime.md#oh_nnmodel_addtensor) (OH_NNModel \*model, const OH_NN_Tensor \*tensor) | Adds a tensor to the model instance. |
| [OH_NNModel_SetTensorData](_neural_nework_runtime.md#oh_nnmodel_settensordata) (OH_NNModel \*model, uint32_t index, const void \*dataBuffer, size_t length) | Sets the tensor value. |
| [OH_NNModel_AddOperation](_neural_nework_runtime.md#oh_nnmodel_addoperation) (OH_NNModel \*model, OH_NN_OperationType op, const OH_NN_UInt32Array \*paramIndices, const OH_NN_UInt32Array \*inputIndices, const OH_NN_UInt32Array \*outputIndices) | Adds an operator to the model instance. |
| [OH_NNModel_SpecifyInputsAndOutputs](_neural_nework_runtime.md#oh_nnmodel_specifyinputsandoutputs) (OH_NNModel \*model, const OH_NN_UInt32Array \*inputIndices, const OH_NN_UInt32Array \*outputIndices) | Specifies the model inputs and outputs. |
| [OH_NNModel_Finish](_neural_nework_runtime.md#oh_nnmodel_finish) (OH_NNModel \*model) | Completes model composition. |
| [OH_NNModel_Destroy](_neural_nework_runtime.md#oh_nnmodel_destroy) (OH_NNModel \*\*model) | Releases the model instance. |
| [OH_NNModel_GetAvailableOperations](_neural_nework_runtime.md#oh_nnmodel_getavailableoperations) (OH_NNModel \*model, size_t deviceID, const bool \*\*isSupported, uint32_t \*opCount) | Queries whether the device supports each operator in the model. The support status is indicated by a sequence of Boolean values. |
| [OH_NNCompilation_Construct](_neural_nework_runtime.md#oh_nncompilation_construct) (const OH_NNModel \*model) | Creates a compilation instance of the [OH_NNCompilation](_neural_nework_runtime.md#oh_nncompilation) type. |
| [OH_NNCompilation_SetDevice](_neural_nework_runtime.md#oh_nncompilation_setdevice) (OH_NNCompilation \*compilation, size_t deviceID) | Specifies the device for model compilation and computation. |
| [OH_NNCompilation_SetCache](_neural_nework_runtime.md#oh_nncompilation_setcache) (OH_NNCompilation \*compilation, const char \*cachePath, uint32_t version) | Sets the cache path and cache version for the compiled model. |
| [OH_NNCompilation_SetPerformanceMode](_neural_nework_runtime.md#oh_nncompilation_setperformancemode) (OH_NNCompilation \*compilation, OH_NN_PerformanceMode performanceMode) | Sets the performance mode for model computation. |
| [OH_NNCompilation_SetPriority](_neural_nework_runtime.md#oh_nncompilation_setpriority) (OH_NNCompilation \*compilation, OH_NN_Priority priority) | Sets the priority for model computation. |
| [OH_NNCompilation_EnableFloat16](_neural_nework_runtime.md#oh_nncompilation_enablefloat16) (OH_NNCompilation \*compilation, bool enableFloat16) | Specifies whether to compute with float16 precision. |
| [OH_NNCompilation_Build](_neural_nework_runtime.md#oh_nncompilation_build) (OH_NNCompilation \*compilation) | Compiles the model. |
| [OH_NNCompilation_Destroy](_neural_nework_runtime.md#oh_nncompilation_destroy) (OH_NNCompilation \*\*compilation) | Releases the Compilation object. |
| [OH_NNExecutor_Construct](_neural_nework_runtime.md#oh_nnexecutor_construct) (OH_NNCompilation \*compilation) | Creates an executor instance of the [OH_NNExecutor](_neural_nework_runtime.md#oh_nnexecutor) type. |
| [OH_NNExecutor_SetInput](_neural_nework_runtime.md#oh_nnexecutor_setinput) (OH_NNExecutor \*executor, uint32_t inputIndex, const OH_NN_Tensor \*tensor, const void \*dataBuffer, size_t length) | Sets the data for a single model input. |
| [OH_NNExecutor_SetOutput](_neural_nework_runtime.md#oh_nnexecutor_setoutput) (OH_NNExecutor \*executor, uint32_t outputIndex, void \*dataBuffer, size_t length) | Sets the buffer for a single model output. |
| [OH_NNExecutor_GetOutputShape](_neural_nework_runtime.md#oh_nnexecutor_getoutputshape) (OH_NNExecutor \*executor, uint32_t outputIndex, int32_t \*\*shape, uint32_t \*shapeLength) | Obtains the dimension information of an output tensor. |
| [OH_NNExecutor_Run](_neural_nework_runtime.md#oh_nnexecutor_run) (OH_NNExecutor \*executor) | Performs inference. |
| [OH_NNExecutor_AllocateInputMemory](_neural_nework_runtime.md#oh_nnexecutor_allocateinputmemory) (OH_NNExecutor \*executor, uint32_t inputIndex, size_t length) | Allocates shared memory on the device for a single input. |
| [OH_NNExecutor_AllocateOutputMemory](_neural_nework_runtime.md#oh_nnexecutor_allocateoutputmemory) (OH_NNExecutor \*executor, uint32_t outputIndex, size_t length) | Allocates shared memory on the device for a single output. |
| [OH_NNExecutor_DestroyInputMemory](_neural_nework_runtime.md#oh_nnexecutor_destroyinputmemory) (OH_NNExecutor \*executor, uint32_t inputIndex, OH_NN_Memory \*\*memory) | Releases the input memory pointed to by the [OH_NN_Memory](_o_h___n_n___memory.md) instance. |
| [OH_NNExecutor_DestroyOutputMemory](_neural_nework_runtime.md#oh_nnexecutor_destroyoutputmemory) (OH_NNExecutor \*executor, uint32_t outputIndex, OH_NN_Memory \*\*memory) | Releases the output memory pointed to by the [OH_NN_Memory](_o_h___n_n___memory.md) instance. |
| [OH_NNExecutor_SetInputWithMemory](_neural_nework_runtime.md#oh_nnexecutor_setinputwithmemory) (OH_NNExecutor \*executor, uint32_t inputIndex, const OH_NN_Tensor \*tensor, const OH_NN_Memory \*memory) | Specifies the hardware shared memory pointed to by the [OH_NN_Memory](_o_h___n_n___memory.md) instance as the shared memory used by a single input. |
| [OH_NNExecutor_SetOutputWithMemory](_neural_nework_runtime.md#oh_nnexecutor_setoutputwithmemory) (OH_NNExecutor \*executor, uint32_t outputIndex, const OH_NN_Memory \*memory) | Specifies the hardware shared memory pointed to by the [OH_NN_Memory](_o_h___n_n___memory.md) instance as the shared memory used by a single output. |
| [OH_NNExecutor_Destroy](_neural_nework_runtime.md#oh_nnexecutor_destroy) (OH_NNExecutor \*\*executor) | Destroys the executor instance and releases the memory occupied by the executor. |
| [OH_NNDevice_GetAllDevicesID](_neural_nework_runtime.md#oh_nndevice_getalldevicesid) (const size_t \*\*allDevicesID, uint32_t \*deviceCount) | Obtains the IDs of the devices connected to Neural Network Runtime. |
| [OH_NNDevice_GetName](_neural_nework_runtime.md#oh_nndevice_getname) (size_t deviceID, const char \*\*name) | Obtains the name of the specified device. |
| [OH_NNDevice_GetType](_neural_nework_runtime.md#oh_nndevice_gettype) (size_t deviceID, [OH_NN_DeviceType](_neural_nework_runtime.md#oh_nn_devicetype) \*deviceType) | Obtains the type of the specified device. |
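Read together, the functions above form a build → compile → execute pipeline. The sketch below strings the stages together under stated assumptions: return types as documented on the individual API pages, a device ID taken from OH_NNDevice_GetAllDevicesID, a model whose tensors and operators have already been added, and an illustrative include path.
```
// Sketch of the overall Neural Network Runtime flow: model -> compilation -> executor.
// Error handling is collapsed to early returns to keep the outline readable.
#include <stddef.h>
#include <stdint.h>
#include "neural_network_runtime/neural_network_runtime.h"

static OH_NN_ReturnCode RunOnFirstDevice(OH_NNModel *model,
                                         const OH_NN_Tensor *inputTensor,
                                         const void *inputData, size_t inputLength,
                                         void *outputBuffer, size_t outputLength)
{
    // Assumes tensors/operations were already added and OH_NNModel_Finish() was called.
    const size_t *devices = NULL;
    uint32_t deviceCount = 0;
    OH_NN_ReturnCode ret = OH_NNDevice_GetAllDevicesID(&devices, &deviceCount);
    if (ret != OH_NN_SUCCESS || deviceCount == 0) {
        return OH_NN_UNAVALIDABLE_DEVICE;
    }

    // Compile the finished model for the first reported device.
    OH_NNCompilation *compilation = OH_NNCompilation_Construct(model);
    if (compilation == NULL) {
        return OH_NN_NULL_PTR;
    }
    ret = OH_NNCompilation_SetDevice(compilation, devices[0]);
    if (ret == OH_NN_SUCCESS) {
        ret = OH_NNCompilation_Build(compilation);
    }
    if (ret != OH_NN_SUCCESS) {
        OH_NNCompilation_Destroy(&compilation);
        return ret;
    }

    // Create an executor, bind input/output buffers, and run inference.
    OH_NNExecutor *executor = OH_NNExecutor_Construct(compilation);
    OH_NNCompilation_Destroy(&compilation);  // assumed no longer needed once the executor exists
    if (executor == NULL) {
        return OH_NN_NULL_PTR;
    }
    ret = OH_NNExecutor_SetInput(executor, 0, inputTensor, inputData, inputLength);
    if (ret == OH_NN_SUCCESS) {
        ret = OH_NNExecutor_SetOutput(executor, 0, outputBuffer, outputLength);
    }
    if (ret == OH_NN_SUCCESS) {
        ret = OH_NNExecutor_Run(executor);
    }
    OH_NNExecutor_Destroy(&executor);
    return ret;
}
```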
# neural_network_runtime_type.h
## Overview
Defines the structs and enums of Neural Network Runtime.
**Since:**
9
**Related Modules:**
[NeuralNetworkRuntime](_neural_nework_runtime.md)
## Summary
### Structs
| Name | Description |
| -------- | -------- |
| [OH_NN_UInt32Array](_o_h___n_n___u_int32_array.md) | User-defined 32-bit unsigned integer array. |
| [OH_NN_QuantParam](_o_h___n_n___quant_param.md) | Quantization information. |
| [OH_NN_Tensor](_o_h___n_n___tensor.md) | Tensor structure. |
| [OH_NN_Memory](_o_h___n_n___memory.md) | Memory structure. |
### Typedefs
| Name | Description |
| -------- | -------- |
| [OH_NNModel](_neural_nework_runtime.md#oh_nnmodel) | Model handle of Neural Network Runtime. |
| [OH_NNCompilation](_neural_nework_runtime.md#oh_nncompilation) | Compiler handle of Neural Network Runtime. |
| [OH_NNExecutor](_neural_nework_runtime.md#oh_nnexecutor) | Executor handle of Neural Network Runtime. |
| [OH_NN_UInt32Array](_neural_nework_runtime.md#oh_nn_uint32array) | User-defined 32-bit unsigned integer array. |
| [OH_NN_QuantParam](_neural_nework_runtime.md#oh_nn_quantparam) | Quantization information. |
| [OH_NN_Tensor](_neural_nework_runtime.md#oh_nn_tensor) | Tensor structure. |
| [OH_NN_Memory](_neural_nework_runtime.md#oh_nn_memory) | Memory structure. |
### Enums
| Name | Description |
| -------- | -------- |
| [OH_NN_PerformanceMode](_neural_nework_runtime.md#oh_nn_performancemode) { OH_NN_PERFORMANCE_NONE = 0, OH_NN_PERFORMANCE_LOW = 1, OH_NN_PERFORMANCE_MEDIUM = 2, OH_NN_PERFORMANCE_HIGH = 3, OH_NN_PERFORMANCE_EXTREME = 4 } | Performance modes of the device. |
| [OH_NN_Priority](_neural_nework_runtime.md#oh_nn_priority) { OH_NN_PRIORITY_NONE = 0, OH_NN_PRIORITY_LOW = 1, OH_NN_PRIORITY_MEDIUM = 2, OH_NN_PRIORITY_HIGH = 3 } | Priorities of model inference tasks. |
| [OH_NN_ReturnCode](_neural_nework_runtime.md#oh_nn_returncode) { OH_NN_SUCCESS = 0, OH_NN_FAILED = 1, OH_NN_INVALID_PARAMETER = 2, OH_NN_MEMORY_ERROR = 3, OH_NN_OPERATION_FORBIDDEN = 4, OH_NN_NULL_PTR = 5, OH_NN_INVALID_FILE = 6, OH_NN_UNAVALIDABLE_DEVICE = 7, OH_NN_INVALID_PATH = 8 } | Error codes defined by Neural Network Runtime. |
| [OH_NN_FuseType](_neural_nework_runtime.md#oh_nn_fusetype) : int8_t { OH_NN_FUSED_NONE = 0, OH_NN_FUSED_RELU = 1, OH_NN_FUSED_RELU6 = 2 } | Activation function types in the fused operators of Neural Network Runtime. |
| [OH_NN_Format](_neural_nework_runtime.md#oh_nn_format) { OH_NN_FORMAT_NONE = 0, OH_NN_FORMAT_NCHW = 1, OH_NN_FORMAT_NHWC = 2 } | Layout types of tensor data. |
| [OH_NN_DeviceType](_neural_nework_runtime.md#oh_nn_devicetype) { OH_NN_OTHERS = 0, OH_NN_CPU = 1, OH_NN_GPU = 2, OH_NN_ACCELERATOR = 3 } | Device types supported by Neural Network Runtime. |
| [OH_NN_DataType](_neural_nework_runtime.md#oh_nn_datatype) { OH_NN_UNKNOWN = 0, OH_NN_BOOL = 1, OH_NN_INT8 = 2, OH_NN_INT16 = 3, OH_NN_INT32 = 4, OH_NN_INT64 = 5, OH_NN_UINT8 = 6, OH_NN_UINT16 = 7, OH_NN_UINT32 = 8, OH_NN_UINT64 = 9, OH_NN_FLOAT16 = 10, OH_NN_FLOAT32 = 11, OH_NN_FLOAT64 = 12 } | Data types supported by Neural Network Runtime. |
| [OH_NN_OperationType](_neural_nework_runtime.md#oh_nn_operationtype) { OH_NN_OPS_ADD = 1, OH_NN_OPS_AVG_POOL = 2, OH_NN_OPS_BATCH_NORM = 3, OH_NN_OPS_BATCH_TO_SPACE_ND = 4, OH_NN_OPS_BIAS_ADD = 5, OH_NN_OPS_CAST = 6, OH_NN_OPS_CONCAT = 7, OH_NN_OPS_CONV2D = 8, OH_NN_OPS_CONV2D_TRANSPOSE = 9, OH_NN_OPS_DEPTHWISE_CONV2D_NATIVE = 10, OH_NN_OPS_DIV = 11, OH_NN_OPS_ELTWISE = 12, OH_NN_OPS_EXPAND_DIMS = 13, OH_NN_OPS_FILL = 14, OH_NN_OPS_FULL_CONNECTION = 15, OH_NN_OPS_GATHER = 16, OH_NN_OPS_HSWISH = 17, OH_NN_OPS_LESS_EQUAL = 18, OH_NN_OPS_MATMUL = 19, OH_NN_OPS_MAXIMUM = 20, OH_NN_OPS_MAX_POOL = 21, OH_NN_OPS_MUL = 22, OH_NN_OPS_ONE_HOT = 23, OH_NN_OPS_PAD = 24, OH_NN_OPS_POW = 25, OH_NN_OPS_SCALE = 26, OH_NN_OPS_SHAPE = 27, OH_NN_OPS_SIGMOID = 28, OH_NN_OPS_SLICE = 29, OH_NN_OPS_SOFTMAX = 30, OH_NN_OPS_SPACE_TO_BATCH_ND = 31, OH_NN_OPS_SPLIT = 32, OH_NN_OPS_SQRT = 33, OH_NN_OPS_SQUARED_DIFFERENCE = 34, OH_NN_OPS_SQUEEZE = 35, OH_NN_OPS_STACK = 36, OH_NN_OPS_STRIDED_SLICE = 37, OH_NN_OPS_SUB = 38, OH_NN_OPS_TANH = 39, OH_NN_OPS_TILE = 40, OH_NN_OPS_TRANSPOSE = 41, OH_NN_OPS_REDUCE_MEAN = 42, OH_NN_OPS_RESIZE_BILINEAR = 43, OH_NN_OPS_RSQRT = 44, OH_NN_OPS_RESHAPE = 45, OH_NN_OPS_PRELU = 46, OH_NN_OPS_RELU = 47, OH_NN_OPS_RELU6 = 48, OH_NN_OPS_LAYER_NORM = 49, OH_NN_OPS_REDUCE_PROD = 50, OH_NN_OPS_REDUCE_ALL = 51, OH_NN_OPS_QUANT_DTYPE_CAST = 52, OH_NN_OPS_TOP_K = 53, OH_NN_OPS_ARG_MAX = 54, OH_NN_OPS_UNSQUEEZE = 55, OH_NN_OPS_GELU = 56 } | Operator types supported by Neural Network Runtime. |
| [OH_NN_TensorType](_neural_nework_runtime.md#oh_nn_tensortype) { OH_NN_TENSOR = 0, OH_NN_ADD_ACTIVATIONTYPE = 1, OH_NN_AVG_POOL_KERNEL_SIZE = 2, OH_NN_AVG_POOL_STRIDE = 3, OH_NN_AVG_POOL_PAD_MODE = 4, OH_NN_AVG_POOL_PAD = 5, OH_NN_AVG_POOL_ACTIVATION_TYPE = 6, OH_NN_BATCH_NORM_EPSILON = 7, OH_NN_BATCH_TO_SPACE_ND_BLOCKSIZE = 8, OH_NN_BATCH_TO_SPACE_ND_CROPS = 9, OH_NN_CONCAT_AXIS = 10, OH_NN_CONV2D_STRIDES = 11, OH_NN_CONV2D_PAD = 12, OH_NN_CONV2D_DILATION = 13, OH_NN_CONV2D_PAD_MODE = 14, OH_NN_CONV2D_ACTIVATION_TYPE = 15, OH_NN_CONV2D_GROUP = 16, OH_NN_CONV2D_TRANSPOSE_STRIDES = 17, OH_NN_CONV2D_TRANSPOSE_PAD = 18, OH_NN_CONV2D_TRANSPOSE_DILATION = 19, OH_NN_CONV2D_TRANSPOSE_OUTPUT_PADDINGS = 20, OH_NN_CONV2D_TRANSPOSE_PAD_MODE = 21, OH_NN_CONV2D_TRANSPOSE_ACTIVATION_TYPE = 22, OH_NN_CONV2D_TRANSPOSE_GROUP = 23, OH_NN_DEPTHWISE_CONV2D_NATIVE_STRIDES = 24, OH_NN_DEPTHWISE_CONV2D_NATIVE_PAD = 25, OH_NN_DEPTHWISE_CONV2D_NATIVE_DILATION = 26, OH_NN_DEPTHWISE_CONV2D_NATIVE_PAD_MODE = 27, OH_NN_DEPTHWISE_CONV2D_NATIVE_ACTIVATION_TYPE = 28, OH_NN_DIV_ACTIVATIONTYPE = 29, OH_NN_ELTWISE_MODE = 30, OH_NN_FULL_CONNECTION_AXIS = 31, OH_NN_FULL_CONNECTION_ACTIVATIONTYPE = 32, OH_NN_MATMUL_TRANSPOSE_A = 33, OH_NN_MATMUL_TRANSPOSE_B = 34, OH_NN_MATMUL_ACTIVATION_TYPE = 35, OH_NN_MAX_POOL_KERNEL_SIZE = 36, OH_NN_MAX_POOL_STRIDE = 37, OH_NN_MAX_POOL_PAD_MODE = 38, OH_NN_MAX_POOL_PAD = 39, OH_NN_MAX_POOL_ACTIVATION_TYPE = 40, OH_NN_MUL_ACTIVATION_TYPE = 41, OH_NN_ONE_HOT_AXIS = 42, OH_NN_PAD_CONSTANT_VALUE = 43, OH_NN_SCALE_ACTIVATIONTYPE = 44, OH_NN_SCALE_AXIS = 45, OH_NN_SOFTMAX_AXIS = 46, OH_NN_SPACE_TO_BATCH_ND_BLOCK_SHAPE = 47, OH_NN_SPACE_TO_BATCH_ND_PADDINGS = 48, OH_NN_SPLIT_AXIS = 49, OH_NN_SPLIT_OUTPUT_NUM = 50, OH_NN_SPLIT_SIZE_SPLITS = 51, OH_NN_SQUEEZE_AXIS = 52, OH_NN_STACK_AXIS = 53, OH_NN_STRIDED_SLICE_BEGIN_MASK = 54, OH_NN_STRIDED_SLICE_END_MASK = 55, OH_NN_STRIDED_SLICE_ELLIPSIS_MASK = 56, OH_NN_STRIDED_SLICE_NEW_AXIS_MASK = 57, OH_NN_STRIDED_SLICE_SHRINK_AXIS_MASK = 58, OH_NN_SUB_ACTIVATIONTYPE = 59, OH_NN_REDUCE_MEAN_KEEP_DIMS = 60, OH_NN_RESIZE_BILINEAR_NEW_HEIGHT = 61, OH_NN_RESIZE_BILINEAR_NEW_WIDTH = 62, OH_NN_RESIZE_BILINEAR_PRESERVE_ASPECT_RATIO = 63, OH_NN_RESIZE_BILINEAR_COORDINATE_TRANSFORM_MODE = 64, OH_NN_RESIZE_BILINEAR_EXCLUDE_OUTSIDE = 65, OH_NN_LAYER_NORM_BEGIN_NORM_AXIS = 66, OH_NN_LAYER_NORM_EPSILON = 67, OH_NN_LAYER_NORM_BEGIN_PARAM_AXIS = 68, OH_NN_LAYER_NORM_ELEMENTWISE_AFFINE = 69, OH_NN_REDUCE_PROD_KEEP_DIMS = 70, OH_NN_REDUCE_ALL_KEEP_DIMS = 71, OH_NN_QUANT_DTYPE_CAST_SRC_T = 72, OH_NN_QUANT_DTYPE_CAST_DST_T = 73, OH_NN_TOP_K_SORTED = 74, OH_NN_ARG_MAX_AXIS = 75, OH_NN_ARG_MAX_KEEPDIMS = 76, OH_NN_UNSQUEEZE_AXIS = 77 } | Tensor types. |
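As a small usage note for the enums above, the sketch below enumerates the attached devices and inspects their OH_NN_DeviceType, treating any code other than OH_NN_SUCCESS as a failure. The function name, printing scheme, and include path are illustrative assumptions.
```
// Sketch: enumerate devices and inspect their OH_NN_DeviceType.
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include "neural_network_runtime/neural_network_runtime.h"

static void PrintDeviceTypes(void)
{
    const size_t *ids = NULL;
    uint32_t count = 0;
    if (OH_NNDevice_GetAllDevicesID(&ids, &count) != OH_NN_SUCCESS) {
        printf("failed to query devices\n");
        return;
    }
    for (uint32_t i = 0; i < count; ++i) {
        OH_NN_DeviceType type = OH_NN_OTHERS;
        if (OH_NNDevice_GetType(ids[i], &type) == OH_NN_SUCCESS) {
            // type is one of OH_NN_CPU / OH_NN_GPU / OH_NN_ACCELERATOR / OH_NN_OTHERS.
            printf("device %zu has type %d\n", ids[i], (int)type);
        }
    }
}
```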