diff --git a/en/application-dev/dfx/hilog-guidelines.md b/en/application-dev/dfx/hilog-guidelines.md index a399a520ba96aa3a64eea8968ed359c05da5440b..25b4a7f9cc5c92d9f20ed6582299d5dd65b937d0 100644 --- a/en/application-dev/dfx/hilog-guidelines.md +++ b/en/application-dev/dfx/hilog-guidelines.md @@ -38,7 +38,7 @@ Log level. | Name | Value | Description | | ----- | ------ | ------------------------------------------------------------ | | DEBUG | 3 | Log level used to record more detailed process information than INFO logs to help developers analyze service processes and locate faults.| -| INFO | 4 | Log level used to record key service process nodes and exceptions that occur during service running,
Log level used to record information about unexpected exceptions, such as network signal loss and login failure.
These logs should be recorded by the dominant module in the service to avoid repeated logging conducted by multiple invoked modules or low-level functions.|
+| INFO | 4 | Log level used to record key service process nodes and unexpected exceptions that occur during service running, such as network signal loss and login failure.<br>These logs should be recorded by the dominant module in the service to avoid repeated logging conducted by multiple invoked modules or low-level functions.|
 | WARN | 5 | Log level used to record severe, unexpected faults that have little impact on users and can be rectified by the programs themselves or through simple operations.|
 | ERROR | 6 | Log level used to record program or functional errors that affect the normal running or use of the functionality and can be fixed at a high cost, for example, by resetting data.|
 | FATAL | 7 | Log level used to record program or functionality crashes that cannot be rectified.
diff --git a/en/application-dev/napi/Readme-EN.md b/en/application-dev/napi/Readme-EN.md
index 5148a2b5ce3cf73bbd68e96ba2d15449f86a0990..162a07f752491d75d2bf17a3c90520155fd3672f 100644
--- a/en/application-dev/napi/Readme-EN.md
+++ b/en/application-dev/napi/Readme-EN.md
@@ -5,5 +5,6 @@
 - [Raw File Development](rawfile-guidelines.md)
 - [Native Window Development](native-window-guidelines.md)
 - [Using MindSpore Lite for Model Inference](mindspore-lite-guidelines.md)
+- [Using MindSpore Lite for Offline Model Conversion and Inference](mindspore-lite-offline-model-guidelines.md)
 - [Connecting the Neural Network Runtime to an AI Inference Framework](neural-network-runtime-guidelines.md)
 - [Purgeable Memory Development](purgeable-memory-guidelines.md)
diff --git a/en/application-dev/napi/mindspore-lite-offline-model-guidelines.md b/en/application-dev/napi/mindspore-lite-offline-model-guidelines.md
new file mode 100644
index 0000000000000000000000000000000000000000..0f149d197cd7ec4f22bb715d75994c09d7577497
--- /dev/null
+++ b/en/application-dev/napi/mindspore-lite-offline-model-guidelines.md
@@ -0,0 +1,112 @@
+# Using MindSpore Lite for Offline Model Conversion and Inference
+
+## Basic Concepts
+
+- MindSpore Lite: a built-in AI inference engine of OpenHarmony that provides inference deployment for deep learning models.
+
+- Neural Network Runtime (NNRt): a bridge that connects the upper-layer AI inference framework to the bottom-layer acceleration chip to implement cross-chip inference and computing of AI models.
+
+- Offline model: a model obtained using the offline model conversion tool of the AI hardware vendor. The hardware vendor is responsible for parsing and inference of AI models.
+
+## When to Use
+
+The common process for MindSpore Lite AI model deployment is as follows:
+- Use the MindSpore Lite model conversion tool to convert third-party models (such as ONNX and CAFFE) to `.ms` models.
+- Call APIs of the MindSpore Lite inference engine to perform model inference. By specifying NNRt as the inference device, you can then use the AI hardware in the system to accelerate inference.
+
+When MindSpore Lite + NNRt inference is used, the online graph building performed in the initial phase introduces a certain model loading delay.
+
+If you want to reduce the loading delay to meet the requirements of the deployment scenario, you can use offline model-based inference as an alternative. The operation procedure is as follows:
+- Use the offline model conversion tool provided by the AI hardware vendor to compile an offline model in advance.
+- Use the MindSpore Lite conversion tool to encapsulate the offline model as a black box into the `.ms` model.
+- Pass the `.ms` model to MindSpore Lite for inference.
+
+During inference, MindSpore Lite directly sends the offline model to the AI hardware connected to NNRt. This way, the model can be loaded without the need for online graph building, greatly reducing the model loading delay. In addition, MindSpore Lite can provide additional hardware-specific information to assist the AI hardware in model inference.
+
+The following sections describe the offline model conversion and inference process in detail.
+
+## Constraints
+
+- Offline model inference is available only on the NNRt backend. The AI hardware must be connected to NNRt and support offline model inference. You can query the NNRt devices available in the system as shown in the sketch below.
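+
+A minimal sketch of such a query, written against the NNRt device functions listed in the MindSpore Lite C API reference (`OH_AI_GetAllNNRTDeviceDescs` and related); the `<mindspore/context.h>` include path and the error handling are assumptions for illustration, not part of the original guide:
+
+```c
+#include <stdio.h>
+#include <mindspore/context.h>
+
+// Print the first NNRt device registered in the system, if any.
+void QueryNNRTDevice(void) {
+    size_t num = 0;
+    NNRTDeviceDesc *descs = OH_AI_GetAllNNRTDeviceDescs(&num);
+    if (descs == NULL || num == 0) {
+        printf("No NNRt device found.\n");
+        return;
+    }
+    // Inspect the first device description returned by the query.
+    printf("NNRt device: name=%s, id=%zu, type=%d\n",
+           OH_AI_GetNameFromNNRTDeviceDesc(descs),
+           OH_AI_GetDeviceIdFromNNRTDeviceDesc(descs),
+           (int)OH_AI_GetTypeFromNNRTDeviceDesc(descs));
+    // Release the description array obtained above.
+    OH_AI_DestroyAllNNRTDeviceDescs(&descs);
+}
+```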
+
+## Offline Model Conversion
+
+
+### 1. Building the MindSpore Lite Release Package
+
+Obtain the [MindSpore Lite source code](https://gitee.com/openharmony/third_party_mindspore). The source code is managed in "compressed package + patch" mode. Run the following commands to decompress the source code package and install the patch:
+```bash
+cd mindspore
+python3 build_helper.py --in_zip_path=./mindspore-v1.8.1.zip --patch_dir=./patches/ --out_src_path=./mindspore-src
+```
+If the command execution is successful, the complete MindSpore Lite source code is generated in `mindspore-src/source/`.
+
+Run the following commands to start building:
+```bash
+cd mindspore-src/source/
+bash build.sh -I x86_64 -j 8
+```
+
+After the building is complete, you can obtain the MindSpore Lite release package from the `output/` directory in the root directory of the source code.
+
+
+### 2. Writing the Extended Configuration File of the Conversion Tool
+
+The offline model comes as a black box and cannot be parsed by the conversion tool to obtain its input and output tensor information. Therefore, you need to manually configure the tensor information in the extended configuration file of the conversion tool. Based on the extended configuration, the conversion tool can then generate the `.ms` model file that encapsulates the offline model.
+
+An example of the extended configuration is as follows:
+- `[third_party_model]` in the first line is a fixed keyword that indicates the section of offline model configuration.
+- The subsequent lines specify the name, data type, shape, and memory format of the input and output tensors of the model. Each field occupies one line and is expressed in the key-value pair format. The sequence of fields is not limited.
+- Among the fields, the data type and shape are mandatory, and the other fields are optional.
+- Extended parameters are also provided. They are used to encapsulate custom configuration of the offline model into the `.ms` file in the key-value pair format. The `.ms` file is passed to the AI hardware by NNRt during inference.
+
+```text
+[third_party_model]
+input_names=in_0;in_1
+input_dtypes=float32;float32
+input_shapes=8,256,256;8,256,256,3
+input_formats=NCHW;NCHW
+output_names=out_0
+output_dtypes=float32
+output_shapes=8,64
+output_formats=NCHW
+extended_parameters=key_foo:value_foo;key_bar:value_bar
+```
+
+The related fields are described as follows:
+
+- `input_names` (optional): model input names, in the string format. If multiple names are specified, use a semicolon (`;`) to separate them.
+- `input_dtypes` (mandatory): model input data types, in the data type format. If multiple data types are specified, use a semicolon (`;`) to separate them.
+- `input_shapes` (mandatory): model input shapes, in the integer array format. If multiple input shapes are specified, use a semicolon (`;`) to separate them.
+- `input_formats` (optional): model input memory formats, in the string format. If multiple formats are specified, use a semicolon (`;`) to separate them. The default value is `NHWC`.
+- `output_names` (optional): model output names, in the string format. If multiple names are specified, use a semicolon (`;`) to separate them.
+- `output_dtypes` (mandatory): model output data types, in the data type format. If multiple data types are specified, use a semicolon (`;`) to separate them.
+- `output_shapes` (mandatory): model output shapes, in the integer array format. If multiple output shapes are specified, use a semicolon (`;`) to separate them.
+- `output_formats` (optional): model output memory formats, in the string format. If multiple formats are specified, use a semicolon (`;`) to separate them. The default value is `NHWC`.
+- `extended_parameters` (optional): custom configuration of the inference hardware, in the key-value pair format. It is passed to the AI hardware through the NNRt backend during inference.
+
+### 3. Converting an Offline Model
+
+Decompress the MindSpore Lite release package obtained in step 1. Go to the directory where the conversion tool is located (that is, `tools/converter/converter/`), and run the following commands:
+
+```bash
+export LD_LIBRARY_PATH=${PWD}/../lib
+./converter_lite --fmk=THIRDPARTY --modelFile=/path/to/your_model --configFile=/path/to/your_config --outputFile=/path/to/output_model
+```
+After the commands are executed, the offline model conversion is complete.
+
+The related parameters are described as follows:
+- `--fmk`: original format of the input model. `THIRDPARTY` indicates an offline model.
+- `--modelFile`: path of the input model.
+- `--configFile`: path of the extended configuration file, which is used to configure offline model information.
+- `--outputFile`: path of the output model. You do not need to add the file name extension. The `.ms` suffix is generated automatically.
+
+> **NOTE**
+>
+> If `fmk` is set to `THIRDPARTY`, offline model conversion is performed. In this case, only the preceding four parameters and the extended configuration file take effect.
+
+## Offline Model Inference
+
+Offline model inference is the same as common MindSpore Lite model inference except that only NNRt devices can be added to the inference context, as in the sketch below.
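+
+A minimal sketch of such a context, assuming the `<mindspore/context.h>` and `<mindspore/model.h>` include paths and the `OH_AI_NNRTDEVICE_ACCELERATOR` device type; a real application must also fill the input tensor data before calling `OH_AI_ModelPredict`:
+
+```c
+#include <mindspore/context.h>
+#include <mindspore/model.h>
+
+// Build and run an offline .ms model on a context that contains only an NNRt device.
+int RunOfflineModel(const char *model_path) {
+    OH_AI_ContextHandle context = OH_AI_ContextCreate();
+    // Create device information from the first NNRt accelerator found in the system.
+    OH_AI_DeviceInfoHandle nnrt = OH_AI_CreateNNRTDeviceInfoByType(OH_AI_NNRTDEVICE_ACCELERATOR);
+    if (nnrt == NULL) {
+        OH_AI_ContextDestroy(&context);
+        return -1;
+    }
+    OH_AI_DeviceInfoSetPerformanceMode(nnrt, OH_AI_PERFORMANCE_HIGH);
+    OH_AI_ContextAddDeviceInfo(context, nnrt);
+
+    // Load and build the .ms model; the encapsulated offline model is forwarded to the AI hardware.
+    OH_AI_ModelHandle model = OH_AI_ModelCreate();
+    if (OH_AI_ModelBuildFromFile(model, model_path, OH_AI_MODELTYPE_MINDIR, context) != OH_AI_STATUS_SUCCESS) {
+        OH_AI_ModelDestroy(&model);
+        return -1;
+    }
+
+    // Run inference; input data must be written to the input tensors beforehand.
+    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
+    OH_AI_TensorHandleArray outputs;
+    OH_AI_Status ret = OH_AI_ModelPredict(model, inputs, &outputs, NULL, NULL);
+
+    OH_AI_ModelDestroy(&model);
+    return ret == OH_AI_STATUS_SUCCESS ? 0 : -1;
+}
+```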
+
+For details about the MindSpore Lite model inference process, see [Using MindSpore Lite for Model Inference](./mindspore-lite-guidelines.md).
diff --git a/en/application-dev/reference/apis/js-apis-socket.md b/en/application-dev/reference/apis/js-apis-socket.md
index 39f79224b6d4514a91ef34967dd06ba90adee457..4cea6beaaec94d89b21a3e59743e54cb3d7b87d8 100644
--- a/en/application-dev/reference/apis/js-apis-socket.md
+++ b/en/application-dev/reference/apis/js-apis-socket.md
@@ -1766,7 +1766,7 @@ promise.then(() => {
 
 setExtraOptions(options: TCPExtraOptions, callback: AsyncCallback\<void\>): void
 
-Sets other properties of the TCPSocket connection after successful binding of the local IP address and port number of the TLSSocket connection. This API uses an asynchronous callback to return the result.
+Sets other properties of the TCPSocket connection after successful binding of the local IP address and port number of the connection. This API uses an asynchronous callback to return the result.
 **System capability**: SystemCapability.Communication.NetStack
 
@@ -1818,7 +1818,7 @@ tls.setExtraOptions({
 
 setExtraOptions(options: TCPExtraOptions): Promise\<void\>
 
-Sets other properties of the TCPSocket connection after successful binding of the local IP address and port number of the TLSSocket connection. This API uses a promise to return the result.
+Sets other properties of the TCPSocket connection after successful binding of the local IP address and port number of the connection. This API uses a promise to return the result.
 
 **System capability**: SystemCapability.Communication.NetStack
 
diff --git a/en/application-dev/reference/native-apis/_mind_spore.md b/en/application-dev/reference/native-apis/_mind_spore.md
index ded41297291a904a08793bd2cd94e2ab667a10a0..e52fc4f327ea142c3f8a0c5777c5b8ce8de67ec0 100644
--- a/en/application-dev/reference/native-apis/_mind_spore.md
+++ b/en/application-dev/reference/native-apis/_mind_spore.md
@@ -7,125 +7,146 @@
 Provides APIs related to MindSpore Lite model inference.
 
 \@Syscap SystemCapability.Ai.MindSpore
 
-**Since:**
+**Since**
+
 9
 
+
 ## Summary
 
-### Files
+### File
 
-| Name | Description |
-| -------- | -------- |
-| [context.h](context_8h.md) | Provides **Context** APIs for configuring runtime information.<br>File to Include: \<mindspore/context.h\> |
-| [data_type.h](data__type_8h.md) | Declares tensor data types.<br>File to Include: \<mindspore/data_type.h\> |
-| [format.h](format_8h.md) | Declares tensor data formats.<br>File to Include: \<mindspore/format.h\> |
-| [model.h](model_8h.md) | Provides model-related APIs for model creation and inference.<br>File to Include: \<mindspore/model.h\> |
-| [status.h](status_8h.md) | Provides the status codes of MindSpore Lite.<br>File to Include: \<mindspore/status.h\> |
-| [tensor.h](tensor_8h.md) | Provides APIs for creating and modifying tensor information.<br>File to Include: \<mindspore/tensor.h\> |
-| [types.h](types_8h.md) | Provides the model file types and device types supported by MindSpore Lite.<br>File to Include: \<mindspore/types.h\> |
+| Name | Description |
+| ------------------------------- | ------------------------------------------------------------ |
+| [context.h](context_8h.md) | Provides **Context** APIs for configuring runtime information.<br>File to include: \<mindspore/context.h\> |
+| [data_type.h](data__type_8h.md) | Declares tensor data types.<br>File to include: \<mindspore/data_type.h\> |
+| [format.h](format_8h.md) | Declares tensor data formats.<br>File to include: \<mindspore/format.h\> |
+| [model.h](model_8h.md) | Provides model-related APIs for model creation and inference.<br>File to include: \<mindspore/model.h\> |
+| [status.h](status_8h.md) | Provides the status codes of MindSpore Lite.<br>File to include: \<mindspore/status.h\> |
+| [tensor.h](tensor_8h.md) | Provides APIs for creating and modifying tensor information.<br>File to include: \<mindspore/tensor.h\> |
+| [types.h](types_8h.md) | Provides the model file types and device types supported by MindSpore Lite.<br>File to include: \<mindspore/types.h\> |
 
 
 ### Structs
 
-| Name | Description |
-| -------- | -------- |
-| [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length. |
-| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum dimension is set by **MS_MAX_SHAPE_NUM**. |
-| [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) | Defines the operator information passed in a callback. |
+| Name | Description |
+| ------------------------------------------------------------ | ---------------------------------------------------- |
+| [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.|
+| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum dimension is set by **MS_MAX_SHAPE_NUM**. |
+| [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) | Defines the operator information passed in a callback. |
 
 
-### Macros
-
-| Name | Description |
-| -------- | -------- |
-| [OH_AI_MAX_SHAPE_NUM](#oh_ai_max_shape_num) 32 | Defines dimension information. The maximum dimension is set by **MS_MAX_SHAPE_NUM**. |
+### Macro Definition
 
+| Name | Description |
+| ------------------------------------------------ | -------------------------------------------- |
+| [OH_AI_MAX_SHAPE_NUM](#oh_ai_max_shape_num) 32 | Defines dimension information. The maximum dimension is set by **MS_MAX_SHAPE_NUM**.|
 
 ### Types
 
-| Name | Description |
-| -------- | -------- |
-| [OH_AI_ContextHandle](#oh_ai_contexthandle) | Defines the pointer to the MindSpore context. |
-| [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information. |
-| [OH_AI_DataType](#oh_ai_datatype) | Declares data types supported by MSTensor. |
-| [OH_AI_Format](#oh_ai_format) | Declares data formats supported by MSTensor. |
-| [OH_AI_ModelHandle](#oh_ai_modelhandle) | Defines the pointer to a model object. |
-| [OH_AI_TensorHandleArray](#oh_ai_tensorhandlearray) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length. |
-| **OH_AI_ShapeInfo** | Defines dimension information. The maximum dimension is set by **MS_MAX_SHAPE_NUM**. |
-| [OH_AI_CallBackParam](#oh_ai_callbackparam) | Defines the operator information passed in a callback. |
-| [OH_AI_KernelCallBack](#oh_ai_kernelcallback)) (const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) outputs, const [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) kernel_Info) | Defines the pointer to a callback. |
-| [OH_AI_Status](#oh_ai_status) | Defines MindSpore status codes. |
-| [OH_AI_TensorHandle](#oh_ai_tensorhandle) | Defines the handle of a tensor object. |
-| [OH_AI_ModelType](#oh_ai_modeltype) | Defines model file types. |
-| [OH_AI_DeviceType](#oh_ai_devicetype) | Defines the supported device types. |
+| Name | Description |
+| ------------------------------------------------------------ | -------------------------------------------------- |
+| [OH_AI_ContextHandle](#oh_ai_contexthandle) | Defines the pointer to the MindSpore context. |
+| [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information. |
+| [OH_AI_DataType](#oh_ai_datatype) | Declares data types supported by MSTensor. 
| +| [OH_AI_Format](#oh_ai_format) | Declares data formats supported by MSTensor. | +| [OH_AI_ModelHandle](#oh_ai_modelhandle) | Defines the pointer to a model object. | +| [OH_AI_TensorHandleArray](#oh_ai_tensorhandlearray) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.| +| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum dimension is set by **MS_MAX_SHAPE_NUM**. | +| [OH_AI_CallBackParam](#oh_ai_callbackparam) | Defines the operator information passed in a callback. | +| [OH_AI_KernelCallBack](#oh_ai_kernelcallback) | Defines the pointer to a callback. | +| [OH_AI_Status](#oh_ai_status) | Defines MindSpore status codes. | +| [OH_AI_TensorHandle](#oh_ai_tensorhandle) | Defines the handle of a tensor object. | +| [OH_AI_ModelType](#oh_ai_modeltype) | Defines model file types. | +| [OH_AI_DeviceType](#oh_ai_devicetype) | Defines the supported device types. | +| [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) | Defines NNRt device types. | +| [OH_AI_PerformanceMode](#oh_ai_performancemode) | Defines performance modes of the NNRt device. | +| [OH_AI_Priority](#oh_ai_priority) | Defines NNRt inference task priorities. | +| [NNRTDeviceDesc](#nnrtdevicedesc) | Defines the NNRt device information, including the device ID and device name. | ### Enums -| Name | Description | -| -------- | -------- | -| [OH_AI_DataType](#oh_ai_datatype) {
OH_AI_DATATYPE_UNKNOWN = 0, OH_AI_DATATYPE_OBJECTTYPE_STRING = 12, OH_AI_DATATYPE_OBJECTTYPE_LIST = 13, OH_AI_DATATYPE_OBJECTTYPE_TUPLE = 14,
OH_AI_DATATYPE_OBJECTTYPE_TENSOR = 17, OH_AI_DATATYPE_NUMBERTYPE_BEGIN = 29, OH_AI_DATATYPE_NUMBERTYPE_BOOL = 30, OH_AI_DATATYPE_NUMBERTYPE_INT8 = 32,
OH_AI_DATATYPE_NUMBERTYPE_INT16 = 33, OH_AI_DATATYPE_NUMBERTYPE_INT32 = 34, OH_AI_DATATYPE_NUMBERTYPE_INT64 = 35, OH_AI_DATATYPE_NUMBERTYPE_UINT8 = 37,
OH_AI_DATATYPE_NUMBERTYPE_UINT16 = 38, OH_AI_DATATYPE_NUMBERTYPE_UINT32 = 39, OH_AI_DATATYPE_NUMBERTYPE_UINT64 = 40, OH_AI_DATATYPE_NUMBERTYPE_FLOAT16 = 42,
OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 = 43, OH_AI_DATATYPE_NUMBERTYPE_FLOAT64 = 44, OH_AI_DATATYPE_NUMBERTYPE_END = 46, OH_AI_DataTypeInvalid = INT32_MAX
} | Declares data types supported by MSTensor. | -| [OH_AI_Format](#oh_ai_format) {
OH_AI_FORMAT_NCHW = 0, OH_AI_FORMAT_NHWC = 1, OH_AI_FORMAT_NHWC4 = 2, OH_AI_FORMAT_HWKC = 3,
OH_AI_FORMAT_HWCK = 4, OH_AI_FORMAT_KCHW = 5, OH_AI_FORMAT_CKHW = 6, OH_AI_FORMAT_KHWC = 7,
OH_AI_FORMAT_CHWK = 8, OH_AI_FORMAT_HW = 9, OH_AI_FORMAT_HW4 = 10, OH_AI_FORMAT_NC = 11,
OH_AI_FORMAT_NC4 = 12, OH_AI_FORMAT_NC4HW4 = 13, OH_AI_FORMAT_NCDHW = 15, OH_AI_FORMAT_NWC = 16,
OH_AI_FORMAT_NCW = 17
} | Declares data formats supported by MSTensor. | -| [OH_AI_CompCode](#oh_ai_compcode) { OH_AI_COMPCODE_CORE = 0x00000000u, OH_AI_COMPCODE_LITE = 0xF0000000u } | Defines MinSpore component codes. | -| [OH_AI_Status](#oh_ai_status) {
OH_AI_STATUS_SUCCESS = 0, OH_AI_STATUS_CORE_FAILED = OH_AI_COMPCODE_CORE \| 0x1, OH_AI_STATUS_LITE_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -1), OH_AI_STATUS_LITE_NULLPTR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -2),
OH_AI_STATUS_LITE_PARAM_INVALID = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -3), OH_AI_STATUS_LITE_NO_CHANGE = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -4), OH_AI_STATUS_LITE_SUCCESS_EXIT = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -5), OH_AI_STATUS_LITE_MEMORY_FAILED = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -6),
OH_AI_STATUS_LITE_NOT_SUPPORT = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -7), OH_AI_STATUS_LITE_THREADPOOL_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -8), OH_AI_STATUS_LITE_UNINITIALIZED_OBJ = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -9), OH_AI_STATUS_LITE_OUT_OF_TENSOR_RANGE = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -100),
OH_AI_STATUS_LITE_INPUT_TENSOR_ERROR, OH_AI_STATUS_LITE_REENTRANT_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -102), OH_AI_STATUS_LITE_GRAPH_FILE_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -200), OH_AI_STATUS_LITE_NOT_FIND_OP = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -300),
OH_AI_STATUS_LITE_INVALID_OP_NAME = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -301), OH_AI_STATUS_LITE_INVALID_OP_ATTR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -302), OH_AI_STATUS_LITE_OP_EXECUTE_FAILURE, OH_AI_STATUS_LITE_FORMAT_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -400),
OH_AI_STATUS_LITE_INFER_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -500), OH_AI_STATUS_LITE_INFER_INVALID, OH_AI_STATUS_LITE_INPUT_PARAM_INVALID
} | Defines MindSpore status codes. | -| [OH_AI_ModelType](#oh_ai_modeltype) { OH_AI_MODELTYPE_MINDIR = 0, OH_AI_MODELTYPE_INVALID = 0xFFFFFFFF } | Defines model file types. | -| [OH_AI_DeviceType](#oh_ai_devicetype) {
OH_AI_DEVICETYPE_CPU = 0, OH_AI_DEVICETYPE_GPU, OH_AI_DEVICETYPE_KIRIN_NPU, OH_AI_DEVICETYPE_NNRT = 60,
OH_AI_DEVICETYPE_INVALID = 100
} | Defines the supported device types. | +| Name | Description | +| ------------------------------------------------------------ | ---------------------------------------- | +| [OH_AI_DataType](#oh_ai_datatype-1) {
OH_AI_DATATYPE_UNKNOWN = 0,
OH_AI_DATATYPE_OBJECTTYPE_STRING = 12,
OH_AI_DATATYPE_OBJECTTYPE_LIST = 13,
OH_AI_DATATYPE_OBJECTTYPE_TUPLE = 14,
OH_AI_DATATYPE_OBJECTTYPE_TENSOR = 17,
OH_AI_DATATYPE_NUMBERTYPE_BEGIN = 29,
OH_AI_DATATYPE_NUMBERTYPE_BOOL = 30,
OH_AI_DATATYPE_NUMBERTYPE_INT8 = 32,
OH_AI_DATATYPE_NUMBERTYPE_INT16 = 33,
OH_AI_DATATYPE_NUMBERTYPE_INT32 = 34,
OH_AI_DATATYPE_NUMBERTYPE_INT64 = 35,
OH_AI_DATATYPE_NUMBERTYPE_UINT8 = 37,
OH_AI_DATATYPE_NUMBERTYPE_UINT16 = 38,
OH_AI_DATATYPE_NUMBERTYPE_UINT32 = 39,
OH_AI_DATATYPE_NUMBERTYPE_UINT64 = 40,
OH_AI_DATATYPE_NUMBERTYPE_FLOAT16 = 42,
OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 = 43,
OH_AI_DATATYPE_NUMBERTYPE_FLOAT64 = 44,
OH_AI_DATATYPE_NUMBERTYPE_END = 46,
OH_AI_DataTypeInvalid = INT32_MAX
} | Declares data types supported by MSTensor. | +| [OH_AI_Format](#oh_ai_format-1) {
OH_AI_FORMAT_NCHW = 0,
OH_AI_FORMAT_NHWC = 1,
OH_AI_FORMAT_NHWC4 = 2,
OH_AI_FORMAT_HWKC = 3,
OH_AI_FORMAT_HWCK = 4,
OH_AI_FORMAT_KCHW = 5,
OH_AI_FORMAT_CKHW = 6,
OH_AI_FORMAT_KHWC = 7,
OH_AI_FORMAT_CHWK = 8,
OH_AI_FORMAT_HW = 9,
OH_AI_FORMAT_HW4 = 10,
OH_AI_FORMAT_NC = 11,
OH_AI_FORMAT_NC4 = 12,
OH_AI_FORMAT_NC4HW4 = 13,
OH_AI_FORMAT_NCDHW = 15,
OH_AI_FORMAT_NWC = 16,
OH_AI_FORMAT_NCW = 17
} | Declares data formats supported by MSTensor. | +| [OH_AI_CompCode](#oh_ai_compcode) {
OH_AI_COMPCODE_CORE = 0x00000000u,
OH_AI_COMPCODE_LITE = 0xF0000000u
} | Defines MindSpore component codes. | +| [OH_AI_Status](#oh_ai_status-1) {
OH_AI_STATUS_SUCCESS = 0, OH_AI_STATUS_CORE_FAILED = OH_AI_COMPCODE_CORE \| 0x1, OH_AI_STATUS_LITE_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -1), OH_AI_STATUS_LITE_NULLPTR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -2),
OH_AI_STATUS_LITE_PARAM_INVALID = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -3), OH_AI_STATUS_LITE_NO_CHANGE = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -4), OH_AI_STATUS_LITE_SUCCESS_EXIT = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -5), OH_AI_STATUS_LITE_MEMORY_FAILED = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -6),
OH_AI_STATUS_LITE_NOT_SUPPORT = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -7), OH_AI_STATUS_LITE_THREADPOOL_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -8), OH_AI_STATUS_LITE_UNINITIALIZED_OBJ = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -9), OH_AI_STATUS_LITE_OUT_OF_TENSOR_RANGE = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -100),
OH_AI_STATUS_LITE_INPUT_TENSOR_ERROR, OH_AI_STATUS_LITE_REENTRANT_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -102), OH_AI_STATUS_LITE_GRAPH_FILE_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -200), OH_AI_STATUS_LITE_NOT_FIND_OP = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -300),
OH_AI_STATUS_LITE_INVALID_OP_NAME = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -301), OH_AI_STATUS_LITE_INVALID_OP_ATTR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -302), OH_AI_STATUS_LITE_OP_EXECUTE_FAILURE, OH_AI_STATUS_LITE_FORMAT_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -400),
OH_AI_STATUS_LITE_INFER_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -500), OH_AI_STATUS_LITE_INFER_INVALID, OH_AI_STATUS_LITE_INPUT_PARAM_INVALID
} | Defines MindSpore status codes. | +| [OH_AI_ModelType](#oh_ai_modeltype-1) {
OH_AI_MODELTYPE_MINDIR = 0,
OH_AI_MODELTYPE_INVALID = 0xFFFFFFFF
} | Defines model file types. | +| [OH_AI_DeviceType](#oh_ai_devicetype-1) {
OH_AI_DEVICETYPE_CPU = 0,
OH_AI_DEVICETYPE_GPU,
OH_AI_DEVICETYPE_KIRIN_NPU,
OH_AI_DEVICETYPE_NNRT = 60,
OH_AI_DEVICETYPE_INVALID = 100
} | Defines the supported device types.| +| [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype-1) {
OH_AI_NNRTDEVICE_OTHERS = 0,
OH_AI_NNRTDEVICE_CPU = 1,
OH_AI_NNRTDEVICE_GPU = 2,
OH_AI_NNRTDEVICE_ACCELERATOR = 3
} | Defines NNRt device types. | +| [OH_AI_PerformanceMode](#oh_ai_performancemode-1) {
OH_AI_PERFORMANCE_NONE = 0,
OH_AI_PERFORMANCE_LOW = 1,
OH_AI_PERFORMANCE_MEDIUM = 2,
OH_AI_PERFORMANCE_HIGH = 3,
OH_AI_PERFORMANCE_EXTREME = 4
} | Defines performance modes of the NNRt device. | +| [OH_AI_Priority](#oh_ai_priority-1) {
OH_AI_PRIORITY_NONE = 0,
OH_AI_PRIORITY_LOW = 1,
OH_AI_PRIORITY_MEDIUM = 2,
OH_AI_PRIORITY_HIGH = 3
} | Defines NNRt inference task priorities. | ### Functions -| Name | Description | -| -------- | -------- | -| [OH_AI_ContextCreate](#oh_ai_contextcreate) () | Creates a context object. | -| [OH_AI_ContextDestroy](#oh_ai_contextdestroy) ([OH_AI_ContextHandle](#oh_ai_contexthandle) \*context) | Destroys a context object. | -| [OH_AI_ContextSetThreadNum](#oh_ai_contextsetthreadnum) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads. | -| [OH_AI_ContextGetThreadNum](#oh_ai_contextgetthreadnum) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the number of threads. | -| [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are categorized into little cores and big cores depending on the CPU frequency. | -| [OH_AI_ContextGetThreadAffinityMode](#oh_ai_contextgetthreadaffinitymode) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores. | -| [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread. | -| [OH_AI_ContextGetThreadAffinityCoreList](#oh_ai_contextgetthreadaffinitycorelist) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores. | -| [OH_AI_ContextSetEnableParallel](#oh_ai_contextsetenableparallel) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. | -| [OH_AI_ContextGetEnableParallel](#oh_ai_contextgetenableparallel) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported. | -| [OH_AI_ContextAddDeviceInfo](#oh_ai_contextadddeviceinfo) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Adds information about a running device. | -| [OH_AI_DeviceInfoCreate](#oh_ai_deviceinfocreate) ([OH_AI_DeviceType](#oh_ai_devicetype) device_type) | Creates a device information object. | -| [OH_AI_DeviceInfoDestroy](#oh_ai_deviceinfodestroy) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) \*device_info) | Destroys a device information instance. | -| [OH_AI_DeviceInfoSetProvider](#oh_ai_deviceinfosetprovider) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the name of a provider. | -| [OH_AI_DeviceInfoGetProvider](#oh_ai_deviceinfogetprovider) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the provider name. | -| [OH_AI_DeviceInfoSetProviderDevice](#oh_ai_deviceinfosetproviderdevice) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device. | -| [OH_AI_DeviceInfoGetProviderDevice](#oh_ai_deviceinfogetproviderdevice) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device. | -| [OH_AI_DeviceInfoGetDeviceType](#oh_ai_deviceinfogetdevicetype) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the type of a provider device. 
| -| [OH_AI_DeviceInfoSetEnableFP16](#oh_ai_deviceinfosetenablefp16) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU/GPU devices. | -| [OH_AI_DeviceInfoGetEnableFP16](#oh_ai_deviceinfogetenablefp16) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU/GPU devices. | -| [OH_AI_DeviceInfoSetFrequency](#oh_ai_deviceinfosetfrequency) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices. | -| [OH_AI_DeviceInfoGetFrequency](#oh_ai_deviceinfogetfrequency) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices. | -| [OH_AI_ModelCreate](#oh_ai_modelcreate) () | Creates a model object. | -| [OH_AI_ModelDestroy](#oh_ai_modeldestroy) ([OH_AI_ModelHandle](#oh_ai_modelhandle) \*model) | Destroys a model object. | -| [OH_AI_ModelBuild](#oh_ai_modelbuild) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const void \*model_data, size_t data_size, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from the memory buffer. | -| [OH_AI_ModelBuildFromFile](#oh_ai_modelbuildfromfile) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*model_path, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from a model file. | -| [OH_AI_ModelResize](#oh_ai_modelresize) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) \*shape_infos, size_t shape_info_num) | Adjusts the input tensor shapes of a built model. | -| [OH_AI_ModelPredict](#oh_ai_modelpredict) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) \*outputs, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) before, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) after) | Performs model inference. | -| [OH_AI_ModelGetInputs](#oh_ai_modelgetinputs) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the input tensor array structure of a model. | -| [OH_AI_ModelGetOutputs](#oh_ai_modelgetoutputs) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the output tensor array structure of a model. | -| [OH_AI_ModelGetInputByTensorName](#oh_ai_modelgetinputbytensorname) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the input tensor of a model by tensor name. | -| [OH_AI_ModelGetOutputByTensorName](#oh_ai_modelgetoutputbytensorname) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the output tensor of a model by tensor name. | -| [OH_AI_TensorCreate](#oh_ai_tensorcreate) (const char \*name, [OH_AI_DataType](#oh_ai_datatype) type, const int64_t \*shape, size_t shape_num, const void \*data, size_t data_len) | Creates a tensor object. | -| [OH_AI_TensorDestroy](#oh_ai_tensordestroy) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) \*tensor) | Destroys a tensor object. 
|
-| [OH_AI_TensorClone](#oh_ai_tensorclone) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Clones a tensor. |
-| [OH_AI_TensorSetName](#oh_ai_tensorsetname) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const char \*name) | Sets the name of a tensor. |
-| [OH_AI_TensorGetName](#oh_ai_tensorgetname) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the name of a tensor. |
-| [OH_AI_TensorSetDataType](#oh_ai_tensorsetdatatype) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_DataType](#oh_ai_datatype) type) | Sets the data type of a tensor. |
-| [OH_AI_TensorGetDataType](#oh_ai_tensorgetdatatype) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the data type of a tensor. |
-| [OH_AI_TensorSetShape](#oh_ai_tensorsetshape) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const int64_t \*shape, size_t shape_num) | Sets the shape of a tensor. |
-| [OH_AI_TensorGetShape](#oh_ai_tensorgetshape) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, size_t \*shape_num) | Obtains the shape of a tensor. |
-| [OH_AI_TensorSetFormat](#oh_ai_tensorsetformat) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_Format](#oh_ai_format) format) | Sets the tensor data format. |
-| [OH_AI_TensorGetFormat](#oh_ai_tensorgetformat) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor data format. |
-| [OH_AI_TensorSetData](#oh_ai_tensorsetdata) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, void \*data) | Sets the tensor data. |
-| [OH_AI_TensorGetData](#oh_ai_tensorgetdata) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to tensor data. |
-| [OH_AI_TensorGetMutableData](#oh_ai_tensorgetmutabledata) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to variable tensor data. If the data is empty, memory will be allocated. |
-| [OH_AI_TensorGetElementNum](#oh_ai_tensorgetelementnum) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of tensor elements. |
-| [OH_AI_TensorGetDataSize](#oh_ai_tensorgetdatasize) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of bytes of the tensor data. |
+| Name | Description |
+| ------------------------------------------------------------ | ------------------------------------------------------------ |
+| [OH_AI_ContextCreate](#oh_ai_contextcreate) () | Creates a context object. |
+| [OH_AI_ContextDestroy](#oh_ai_contextdestroy) ([OH_AI_ContextHandle](#oh_ai_contexthandle) \*context) | Destroys a context object. |
+| [OH_AI_ContextSetThreadNum](#oh_ai_contextsetthreadnum) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads. |
+| [OH_AI_ContextGetThreadNum](#oh_ai_contextgetthreadnum) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the number of threads. |
+| [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are classified into big, medium, and little cores based on the CPU frequency. Only big or medium cores can be bound; little cores are not supported.|
+| [OH_AI_ContextGetThreadAffinityMode](#oh_ai_contextgetthreadaffinitymode) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores. 
| +| [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread. | +| [OH_AI_ContextGetThreadAffinityCoreList](#oh_ai_contextgetthreadaffinitycorelist) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores. | +| [OH_AI_ContextSetEnableParallel](#oh_ai_contextsetenableparallel) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. The setting is ineffective because the feature of this API is not yet available. | +| [OH_AI_ContextGetEnableParallel](#oh_ai_contextgetenableparallel) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported. | +| [OH_AI_ContextAddDeviceInfo](#oh_ai_contextadddeviceinfo) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Attaches the custom device information to the inference context. | +| [OH_AI_DeviceInfoCreate](#oh_ai_deviceinfocreate) ([OH_AI_DeviceType](#oh_ai_devicetype) device_type) | Creates a device information object. | +| [OH_AI_DeviceInfoDestroy](#oh_ai_deviceinfodestroy) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) \*device_info) | Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually. | +| [OH_AI_DeviceInfoSetProvider](#oh_ai_deviceinfosetprovider) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the provider name. | +| [OH_AI_DeviceInfoGetProvider](#oh_ai_deviceinfogetprovider) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the provider name. | +| [OH_AI_DeviceInfoSetProviderDevice](#oh_ai_deviceinfosetproviderdevice) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device. | +| [OH_AI_DeviceInfoGetProviderDevice](#oh_ai_deviceinfogetproviderdevice) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device. | +| [OH_AI_DeviceInfoGetDeviceType](#oh_ai_deviceinfogetdevicetype) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the device type. | +| [OH_AI_DeviceInfoSetEnableFP16](#oh_ai_deviceinfosetenablefp16) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU and GPU devices. | +| [OH_AI_DeviceInfoGetEnableFP16](#oh_ai_deviceinfogetenablefp16) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices. | +| [OH_AI_DeviceInfoSetFrequency](#oh_ai_deviceinfosetfrequency) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices. | +| [OH_AI_DeviceInfoGetFrequency](#oh_ai_deviceinfogetfrequency) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices. 
| +| [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs) (size_t \*num) | Obtains the descriptions of all NNRt devices in the system. | +| [OH_AI_DestroyAllNNRTDeviceDescs](#oh_ai_destroyallnnrtdevicedescs) ([NNRTDeviceDesc](#nnrtdevicedesc) \*\*desc) | Destroys the array of NNRT descriptions obtained by [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs).| +| [OH_AI_GetDeviceIdFromNNRTDeviceDesc](#oh_ai_getdeviceidfromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.| +| [OH_AI_GetNameFromNNRTDeviceDesc](#oh_ai_getnamefromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device name from the specified NNRt device description. | +| [OH_AI_GetTypeFromNNRTDeviceDesc](#oh_ai_gettypefromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device type from the specified NNRt device description. | +| [OH_AI_CreateNNRTDeviceInfoByName](#oh_ai_creatennrtdeviceinfobyname) (const char \*name) | Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.| +| [OH_AI_CreateNNRTDeviceInfoByType](#oh_ai_creatennrtdeviceinfobytype) ([OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) type) | Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.| +| [OH_AI_DeviceInfoSetDeviceId](#oh_ai_deviceinfosetdeviceid) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, size_t device_id) | Sets the ID of an NNRt device. This API is available only for NNRt devices. | +| [OH_AI_DeviceInfoGetDeviceId](#oh_ai_deviceinfogetdeviceid) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the ID of an NNRt device. This API is available only for NNRt devices. | +| [OH_AI_DeviceInfoSetPerformanceMode](#oh_ai_deviceinfosetperformancemode) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_PerformanceMode](#oh_ai_performancemode) mode) | Sets the performance mode of an NNRt device. This API is available only for NNRt devices. | +| [OH_AI_DeviceInfoGetPerformanceMode](#oh_ai_deviceinfogetperformancemode) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the performance mode of an NNRt device. This API is available only for NNRt devices. | +| [OH_AI_DeviceInfoSetPriority](#oh_ai_deviceinfosetpriority) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_Priority](#oh_ai_priority) priority) | Sets the priority of an NNRT task. This API is available only for NNRt devices. | +| [OH_AI_DeviceInfoGetPriority](#oh_ai_deviceinfogetpriority) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the priority of an NNRT task. This API is available only for NNRt devices. | +| [OH_AI_ModelCreate](#oh_ai_modelcreate) () | Creates a model object. | +| [OH_AI_ModelDestroy](#oh_ai_modeldestroy) ([OH_AI_ModelHandle](#oh_ai_modelhandle) \*model) | Destroys a model object. | +| [OH_AI_ModelBuild](#oh_ai_modelbuild) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const void \*model_data, size_t data_size, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from the memory buffer. 
| +| [OH_AI_ModelBuildFromFile](#oh_ai_modelbuildfromfile) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*model_path, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from a model file. | +| [OH_AI_ModelResize](#oh_ai_modelresize) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) \*shape_infos, size_t shape_info_num) | Adjusts the input tensor shapes of a built model. | +| [OH_AI_ModelPredict](#oh_ai_modelpredict) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) \*outputs, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) before, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) after) | Performs model inference. | +| [OH_AI_ModelGetInputs](#oh_ai_modelgetinputs) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the input tensor array structure of a model. | +| [OH_AI_ModelGetOutputs](#oh_ai_modelgetoutputs) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the output tensor array structure of a model. | +| [OH_AI_ModelGetInputByTensorName](#oh_ai_modelgetinputbytensorname) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the input tensor of a model by tensor name. | +| [OH_AI_ModelGetOutputByTensorName](#oh_ai_modelgetoutputbytensorname) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the output tensor of a model by tensor name. | +| [OH_AI_TensorCreate](#oh_ai_tensorcreate) (const char \*name, [OH_AI_DataType](#oh_ai_datatype) type, const int64_t \*shape, size_t shape_num, const void \*data, size_t data_len) | Creates a tensor object. | +| [OH_AI_TensorDestroy](#oh_ai_tensordestroy) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) \*tensor) | Destroys a tensor object. | +| [OH_AI_TensorClone](#oh_ai_tensorclone) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Deeply copies a tensor. | +| [OH_AI_TensorSetName](#oh_ai_tensorsetname) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const char \*name) | Sets the tensor name. | +| [OH_AI_TensorGetName](#oh_ai_tensorgetname) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor name. | +| [OH_AI_TensorSetDataType](#oh_ai_tensorsetdatatype) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_DataType](#oh_ai_datatype) type) | Sets the data type of a tensor. | +| [OH_AI_TensorGetDataType](#oh_ai_tensorgetdatatype) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor type. | +| [OH_AI_TensorSetShape](#oh_ai_tensorsetshape) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const int64_t \*shape, size_t shape_num) | Sets the tensor shape. | +| [OH_AI_TensorGetShape](#oh_ai_tensorgetshape) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, size_t \*shape_num) | Obtains the tensor shape. | +| [OH_AI_TensorSetFormat](#oh_ai_tensorsetformat) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_Format](#oh_ai_format) format) | Sets the tensor data format. | +| [OH_AI_TensorGetFormat](#oh_ai_tensorgetformat) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor data format. 
| +| [OH_AI_TensorSetData](#oh_ai_tensorsetdata) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, void \*data) | Sets the tensor data. | +| [OH_AI_TensorGetData](#oh_ai_tensorgetdata) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to tensor data. | +| [OH_AI_TensorGetMutableData](#oh_ai_tensorgetmutabledata) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to variable tensor data. If the data is empty, memory will be allocated. | +| [OH_AI_TensorGetElementNum](#oh_ai_tensorgetelementnum) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of tensor elements. | +| [OH_AI_TensorGetDataSize](#oh_ai_tensorgetdatasize) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of bytes of the tensor data. | ## Macro Description @@ -133,84 +154,116 @@ Provides APIs related to MindSpore Lite model inference. ### OH_AI_MAX_SHAPE_NUM - + ``` #define OH_AI_MAX_SHAPE_NUM 32 ``` -**Description**
+ +**Description** + Defines dimension information. The maximum dimension is set by **MS_MAX_SHAPE_NUM**. ## Type Description +### NNRTDeviceDesc + + +``` +typedef struct NNRTDeviceDesc NNRTDeviceDesc +``` + +**Description** + +Defines the NNRt device information, including the device ID and device name. + +**Since** + +10 + + ### OH_AI_CallBackParam - + ``` -typedef struct OH_AI_CallBackParamOH_AI_CallBackParam +typedef struct OH_AI_CallBackParam OH_AI_CallBackParam ``` -**Description**
+ +**Description** + Defines the operator information passed in a callback. ### OH_AI_ContextHandle - + ``` typedef void* OH_AI_ContextHandle ``` -**Description**
-Defines the pointer to the MindSpore context. + +**Description** + +Defines the pointer to the MindSpore context. ### OH_AI_DataType - + ``` -typedef enum OH_AI_DataTypeOH_AI_DataType +typedef enum OH_AI_DataType OH_AI_DataType ``` -**Description**
+ +**Description** + Declares data types supported by MSTensor. ### OH_AI_DeviceInfoHandle - + ``` typedef void* OH_AI_DeviceInfoHandle ``` -**Description**
+ +**Description** + Defines the pointer to the MindSpore device information. ### OH_AI_DeviceType - + ``` -typedef enum OH_AI_DeviceTypeOH_AI_DeviceType +typedef enum OH_AI_DeviceType OH_AI_DeviceType ``` -**Description**
+ +**Description** + Defines the supported device types. ### OH_AI_Format - + ``` -typedef enum OH_AI_FormatOH_AI_Format +typedef enum OH_AI_Format OH_AI_Format ``` -**Description**
+ +**Description** + Declares data formats supported by MSTensor. ### OH_AI_KernelCallBack - + ``` typedef bool(* OH_AI_KernelCallBack) (const OH_AI_TensorHandleArray inputs, const OH_AI_TensorHandleArray outputs, const OH_AI_CallBackParam kernel_Info) ``` -**Description**
+
+**Description**
+
 Defines the pointer to a callback.
 
 This pointer is used to set the two callback functions in [OH_AI_ModelPredict](#oh_ai_modelpredict). Each callback function must contain three parameters, where **inputs** and **outputs** indicate the input and output tensors of the operator, and **kernel_Info** indicates information about the current operator. You can use the callback functions to monitor the operator execution status, for example, the operator execution time and correctness.
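+
+A minimal sketch of such a callback, assuming the **node_name** and **node_type** fields documented for [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md); the helper name is hypothetical:
+
+```c
+#include <stdbool.h>
+#include <stdio.h>
+#include <mindspore/model.h>
+
+// Called before each operator runs; prints the operator and continues inference.
+bool PrintNodeBefore(const OH_AI_TensorHandleArray inputs, const OH_AI_TensorHandleArray outputs,
+                     const OH_AI_CallBackParam kernel_Info) {
+    printf("running operator %s (%s)\n", kernel_Info.node_name, kernel_Info.node_type);
+    return true;  // Returning false aborts the inference.
+}
+
+// Usage: OH_AI_ModelPredict(model, inputs, &outputs, PrintNodeBefore, NULL);
+```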
 
@@ -218,51 +271,109 @@
 
### OH_AI_ModelHandle


```
typedef void* OH_AI_ModelHandle
```

+
+**Description**
+
 Defines the pointer to a model object.


### OH_AI_ModelType


```
-typedef enum OH_AI_ModelTypeOH_AI_ModelType
+typedef enum OH_AI_ModelType OH_AI_ModelType
```

-**Description**
+ +**Description** + Defines model file types. +### OH_AI_NNRTDeviceType + + +``` +typedef enum OH_AI_NNRTDeviceType OH_AI_NNRTDeviceType +``` + +**Description** + +Defines NNRt device types. + +**Since** + +10 + + +### OH_AI_PerformanceMode + + +``` +typedef enum OH_AI_PerformanceMode OH_AI_PerformanceMode +``` + +**Description** + +Defines performance modes of the NNRt device. + +**Since** + +10 + + +### OH_AI_Priority + + +``` +typedef enum OH_AI_Priority OH_AI_Priority +``` + +**Description** + +Defines NNRt inference task priorities. + +**Since** + +10 + + ### OH_AI_Status - + ``` -typedef enum OH_AI_StatusOH_AI_Status +typedef enum OH_AI_Status OH_AI_Status ``` -**Description**
+ +**Description** + Defines MindSpore status codes. ### OH_AI_TensorHandle - + ``` typedef void* OH_AI_TensorHandle ``` -**Description**
+ +**Description** + Defines the handle of a tensor object. ### OH_AI_TensorHandleArray - + ``` -typedef struct OH_AI_TensorHandleArrayOH_AI_TensorHandleArray +typedef struct OH_AI_TensorHandleArray OH_AI_TensorHandleArray ``` -**Description**
+ +**Description** + Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length. @@ -271,215 +382,304 @@ Defines the tensor array structure, which is used to store the tensor array poin ### OH_AI_CompCode - + ``` enum OH_AI_CompCode ``` -**Description**
-Defines MinSpore component codes. -| Name | Description | -| -------- | -------- | -| OH_AI_COMPCODE_CORE | MindSpore Core code | -| OH_AI_COMPCODE_LITE | MindSpore Lite code | +**Description** + +Defines MindSpore component codes. + +| Value | Description | +| ------------------- | --------------------- | +| OH_AI_COMPCODE_CORE | MindSpore Core code| +| OH_AI_COMPCODE_LITE | MindSpore Lite code| ### OH_AI_DataType - + ``` enum OH_AI_DataType ``` -**Description**
+
+**Description**
+
Declares data types supported by MSTensor.

-| Name | Description |
-| -------- | -------- |
-| OH_AI_DATATYPE_UNKNOWN | Unknown data type |
-| OH_AI_DATATYPE_OBJECTTYPE_STRING | String data type |
-| OH_AI_DATATYPE_OBJECTTYPE_LIST | List data type |
-| OH_AI_DATATYPE_OBJECTTYPE_TUPLE | Tuple data type |
-| OH_AI_DATATYPE_OBJECTTYPE_TENSOR | TensorList data type |
-| OH_AI_DATATYPE_NUMBERTYPE_BEGIN | Beginning of the number type |
-| OH_AI_DATATYPE_NUMBERTYPE_BOOL | Bool data type |
-| OH_AI_DATATYPE_NUMBERTYPE_INT8 | Int8 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_INT16 | Int16 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_INT32 | Int32 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_INT64 | Int64 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_UINT8 | UInt8 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_UINT16 | UInt16 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_UINT32 | UInt32 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_UINT64 | UInt64 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_FLOAT16 | Float16 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 | Float32 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_FLOAT64 | Float64 data type |
-| OH_AI_DATATYPE_NUMBERTYPE_END | End of the number type |
-| OH_AI_DataTypeInvalid | Invalid data type |
+| Value | Description |
+| --------------------------------- | ---------------------- |
+| OH_AI_DATATYPE_UNKNOWN | Unknown data type. |
+| OH_AI_DATATYPE_OBJECTTYPE_STRING | String data. |
+| OH_AI_DATATYPE_OBJECTTYPE_LIST | List data. |
+| OH_AI_DATATYPE_OBJECTTYPE_TUPLE | Tuple data. |
+| OH_AI_DATATYPE_OBJECTTYPE_TENSOR | TensorList data. |
+| OH_AI_DATATYPE_NUMBERTYPE_BEGIN | Beginning of the number type. |
+| OH_AI_DATATYPE_NUMBERTYPE_BOOL | Bool data. |
+| OH_AI_DATATYPE_NUMBERTYPE_INT8 | Int8 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_INT16 | Int16 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_INT32 | Int32 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_INT64 | Int64 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_UINT8 | UInt8 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_UINT16 | UInt16 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_UINT32 | UInt32 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_UINT64 | UInt64 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_FLOAT16 | Float16 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 | Float32 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_FLOAT64 | Float64 data. |
+| OH_AI_DATATYPE_NUMBERTYPE_END | End of the number type.|
+| OH_AI_DataTypeInvalid | Invalid data type. |


### OH_AI_DeviceType


```
enum OH_AI_DeviceType
```

-**Description**
+ +**Description** + Defines the supported device types. -| Name | Description | -| -------- | -------- | -| OH_AI_DEVICETYPE_CPU | Device type: CPU
since 9 | -| OH_AI_DEVICETYPE_GPU | Device type: GPU
Reserved, not support yet.
since 9 | -| OH_AI_DEVICETYPE_KIRIN_NPU | Device type: Kirin NPU
Reserved, not support yet.
since 9 | -| OH_AI_DEVICETYPE_NNRT | Device type: NNRt
OHOS-only device range[60,80).
since 9 | -| OH_AI_DEVICETYPE_INVALID | Invalid device type
since 9 |
+| Value | Description |
+| -------------------------- | --------------------------------------- |
+| OH_AI_DEVICETYPE_CPU | Device type: CPU |
+| OH_AI_DEVICETYPE_GPU | Device type: GPU. Reserved.|
+| OH_AI_DEVICETYPE_KIRIN_NPU | Device type: Kirin NPU. Reserved.|
+| OH_AI_DEVICETYPE_NNRT | Device type: NNRt. OHOS device range: [60, 80)|
+| OH_AI_DEVICETYPE_INVALID | Invalid device type |


### OH_AI_Format


```
enum OH_AI_Format
```

-**Description**
+
+**Description**
+
Declares data formats supported by MSTensor.

-| Name | Description |
-| -------- | -------- |
-| OH_AI_FORMAT_NCHW | NCHW format |
-| OH_AI_FORMAT_NHWC | NHWC format |
-| OH_AI_FORMAT_NHWC4 | NHWC4 format |
-| OH_AI_FORMAT_HWKC | HWKC format |
-| OH_AI_FORMAT_HWCK | HWCK format |
-| OH_AI_FORMAT_KCHW | KCHW format |
-| OH_AI_FORMAT_CKHW | CKHW format |
-| OH_AI_FORMAT_KHWC | KHWC format |
-| OH_AI_FORMAT_CHWK | CHWK format |
-| OH_AI_FORMAT_HW | HW format |
-| OH_AI_FORMAT_HW4 | HW4 format |
-| OH_AI_FORMAT_NC | NC format |
-| OH_AI_FORMAT_NC4 | NC4 format |
-| OH_AI_FORMAT_NC4HW4 | NC4HW4 format |
-| OH_AI_FORMAT_NCDHW | NCDHW format |
-| OH_AI_FORMAT_NWC | NWC format |
-| OH_AI_FORMAT_NCW | NCW format |
+| Value | Description |
+| ------------------- | ---------------- |
+| OH_AI_FORMAT_NCHW | Tensor data is stored in the sequence of batch number N, channel C, height H, and width W. |
+| OH_AI_FORMAT_NHWC | Tensor data is stored in the sequence of batch number N, height H, width W, and channel C. |
+| OH_AI_FORMAT_NHWC4 | Tensor data is stored in the sequence of batch number N, height H, width W, and channel C. The C axis is 4-byte aligned. |
+| OH_AI_FORMAT_HWKC | Tensor data is stored in the sequence of height H, width W, kernel count K, and channel C. |
+| OH_AI_FORMAT_HWCK | Tensor data is stored in the sequence of height H, width W, channel C, and kernel count K. |
+| OH_AI_FORMAT_KCHW | Tensor data is stored in the sequence of kernel count K, channel C, height H, and width W. |
+| OH_AI_FORMAT_CKHW | Tensor data is stored in the sequence of channel C, kernel count K, height H, and width W. |
+| OH_AI_FORMAT_KHWC | Tensor data is stored in the sequence of kernel count K, height H, width W, and channel C. |
+| OH_AI_FORMAT_CHWK | Tensor data is stored in the sequence of channel C, height H, width W, and kernel count K. |
+| OH_AI_FORMAT_HW | Tensor data is stored in the sequence of height H and width W. |
+| OH_AI_FORMAT_HW4 | Tensor data is stored in the sequence of height H and width W. The W axis is 4-byte aligned. |
+| OH_AI_FORMAT_NC | Tensor data is stored in the sequence of batch number N and channel C. |
+| OH_AI_FORMAT_NC4 | Tensor data is stored in the sequence of batch number N and channel C. The C axis is 4-byte aligned. |
+| OH_AI_FORMAT_NC4HW4 | Tensor data is stored in the sequence of batch number N, channel C, height H, and width W. The C axis and W axis are 4-byte aligned.|
+| OH_AI_FORMAT_NCDHW | Tensor data is stored in the sequence of batch number N, channel C, depth D, height H, and width W. |
+| OH_AI_FORMAT_NWC | Tensor data is stored in the sequence of batch number N, width W, and channel C. |
+| OH_AI_FORMAT_NCW | Tensor data is stored in the sequence of batch number N, channel C, and width W. |


### OH_AI_ModelType


```
enum OH_AI_ModelType
```

-**Description**
+ +**Description** + Defines model file types. -| Name | Description | -| -------- | -------- | -| OH_AI_MODELTYPE_MINDIR | Model type: MindIR
since 9 | -| OH_AI_MODELTYPE_INVALID | Invalid model type
since 9 | +| Value | Description | +| ----------------------- | ------------------ | +| OH_AI_MODELTYPE_MINDIR | MindIR model type. The model file name extension is **.ms**.| +| OH_AI_MODELTYPE_INVALID | Invalid model type. | + + +### OH_AI_NNRTDeviceType + + +``` +enum OH_AI_NNRTDeviceType +``` + +**Description** + +Defines NNRt device types. + +**Since** + +10 + +| Value | Description | +| ---------------------------- | ----------------------------------- | +| OH_AI_NNRTDEVICE_OTHERS | Others (any device type except the following three types).| +| OH_AI_NNRTDEVICE_CPU | CPU. | +| OH_AI_NNRTDEVICE_GPU | GPU. | +| OH_AI_NNRTDEVICE_ACCELERATOR | Specific acceleration device. | + + +### OH_AI_PerformanceMode + + +``` +enum OH_AI_PerformanceMode +``` + +**Description** + +Defines performance modes of the NNRt device. + +**Since** + +10 + +| Value | Description | +| ------------------------- | ------------------- | +| OH_AI_PERFORMANCE_NONE | No special settings. | +| OH_AI_PERFORMANCE_LOW | Low power consumption. | +| OH_AI_PERFORMANCE_MEDIUM | Balanced power consumption and performance.| +| OH_AI_PERFORMANCE_HIGH | High performance. | +| OH_AI_PERFORMANCE_EXTREME | Ultimate performance. | + + +### OH_AI_Priority + + +``` +enum OH_AI_Priority +``` + +**Description** + +Defines NNRt inference task priorities. + +**Since** + +10 + +| Value | Description | +| --------------------- | -------------- | +| OH_AI_PRIORITY_NONE | No priority preference.| +| OH_AI_PRIORITY_LOW | Low priority.| +| OH_AI_PRIORITY_MEDIUM | Medium priority.| +| OH_AI_PRIORITY_HIGH | High priority. | ### OH_AI_Status - + ``` enum OH_AI_Status ``` -**Description**
-Defines MindSpore status codes. -| Name | Description | -| -------- | -------- | -| OH_AI_STATUS_SUCCESS | Success | -| OH_AI_STATUS_CORE_FAILED | MindSpore Core failure | -| OH_AI_STATUS_LITE_ERROR | MindSpore Lite error | -| OH_AI_STATUS_LITE_NULLPTR | MindSpore Lite null pointer | -| OH_AI_STATUS_LITE_PARAM_INVALID | MindSpore Lite invalid parameters | -| OH_AI_STATUS_LITE_NO_CHANGE | MindSpore Lite no change | -| OH_AI_STATUS_LITE_SUCCESS_EXIT | MindSpore Lite exit without errors | -| OH_AI_STATUS_LITE_MEMORY_FAILED | MindSpore Lite memory allocation failure | -| OH_AI_STATUS_LITE_NOT_SUPPORT | MindSpore Lite not supported | -| OH_AI_STATUS_LITE_THREADPOOL_ERROR | MindSpore Lite thread pool error | -| OH_AI_STATUS_LITE_UNINITIALIZED_OBJ | MindSpore Lite uninitialized | -| OH_AI_STATUS_LITE_OUT_OF_TENSOR_RANGE | MindSpore Lite tensor overflow | -| OH_AI_STATUS_LITE_INPUT_TENSOR_ERROR | MindSpore Lite input tensor error | -| OH_AI_STATUS_LITE_REENTRANT_ERROR | MindSpore Lite reentry error | -| OH_AI_STATUS_LITE_GRAPH_FILE_ERROR | MindSpore Lite file error | -| OH_AI_STATUS_LITE_NOT_FIND_OP | MindSpore Lite operator not found | -| OH_AI_STATUS_LITE_INVALID_OP_NAME | MindSpore Lite invalid operators | -| OH_AI_STATUS_LITE_INVALID_OP_ATTR | MindSpore Lite invalid operator hyperparameters | -| OH_AI_STATUS_LITE_OP_EXECUTE_FAILURE | MindSpore Lite operator execution failure | -| OH_AI_STATUS_LITE_FORMAT_ERROR | MindSpore Lite tensor format error | -| OH_AI_STATUS_LITE_INFER_ERROR | MindSpore Lite shape inference error | -| OH_AI_STATUS_LITE_INFER_INVALID | MindSpore Lite invalid shape inference | -| OH_AI_STATUS_LITE_INPUT_PARAM_INVALID | MindSpore Lite invalid input parameters | +**Description** + +Defines MindSpore status codes. +| Value | Description | +| ------------------------------------- | ----------------------------------------- | +| OH_AI_STATUS_SUCCESS | Success. | +| OH_AI_STATUS_CORE_FAILED | MindSpore Core failure. | +| OH_AI_STATUS_LITE_ERROR | MindSpore Lite error. | +| OH_AI_STATUS_LITE_NULLPTR | MindSpore Lite null pointer. | +| OH_AI_STATUS_LITE_PARAM_INVALID | MindSpore Lite invalid parameters. | +| OH_AI_STATUS_LITE_NO_CHANGE | MindSpore Lite no change. | +| OH_AI_STATUS_LITE_SUCCESS_EXIT | MindSpore Lite exit without errors.| +| OH_AI_STATUS_LITE_MEMORY_FAILED | MindSpore Lite memory allocation failure. | +| OH_AI_STATUS_LITE_NOT_SUPPORT | MindSpore Lite not supported. | +| OH_AI_STATUS_LITE_THREADPOOL_ERROR | MindSpore Lite thread pool error. | +| OH_AI_STATUS_LITE_UNINITIALIZED_OBJ | MindSpore Lite uninitialized. | +| OH_AI_STATUS_LITE_OUT_OF_TENSOR_RANGE | MindSpore Lite tensor overflow. | +| OH_AI_STATUS_LITE_INPUT_TENSOR_ERROR | MindSpore Lite input tensor error. | +| OH_AI_STATUS_LITE_REENTRANT_ERROR | MindSpore Lite reentry error. | +| OH_AI_STATUS_LITE_GRAPH_FILE_ERROR | MindSpore Lite file error. | +| OH_AI_STATUS_LITE_NOT_FIND_OP | MindSpore Lite operator not found. | +| OH_AI_STATUS_LITE_INVALID_OP_NAME | MindSpore Lite invalid operators. | +| OH_AI_STATUS_LITE_INVALID_OP_ATTR | MindSpore Lite invalid operator hyperparameters. | +| OH_AI_STATUS_LITE_OP_EXECUTE_FAILURE | MindSpore Lite operator execution failure. | +| OH_AI_STATUS_LITE_FORMAT_ERROR | MindSpore Lite tensor format error. | +| OH_AI_STATUS_LITE_INFER_ERROR | MindSpore Lite shape inference error. | +| OH_AI_STATUS_LITE_INFER_INVALID | MindSpore Lite invalid shape inference. 
| +| OH_AI_STATUS_LITE_INPUT_PARAM_INVALID | MindSpore Lite invalid input parameters.| ## Function Description ### OH_AI_ContextAddDeviceInfo() - + ``` OH_AI_API void OH_AI_ContextAddDeviceInfo (OH_AI_ContextHandle context, OH_AI_DeviceInfoHandle device_info ) ``` -**Description**
-Adds information about a running device. - **Parameters** +**Description** -| Name | Description | -| -------- | -------- | -| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance. | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | +Attaches the custom device information to the inference context. + +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.| +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| ### OH_AI_ContextCreate() - + ``` OH_AI_API OH_AI_ContextHandle OH_AI_ContextCreate () ``` -**Description**
+ +**Description** + Creates a context object. **Returns** -[OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context. +[OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance. ### OH_AI_ContextDestroy() - + ``` OH_AI_API void OH_AI_ContextDestroy (OH_AI_ContextHandle * context) ``` -**Description**
+ +**Description** + Destroys a context object. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| context | Level-2 pointer to [OH_AI_ContextHandle](#oh_ai_contexthandle). After the context is destroyed, the pointer is set to null. | +| Name | Description | +| ------- | ------------------------------------------------------------ | +| context | Level-2 pointer to [OH_AI_ContextHandle](#oh_ai_contexthandle). After the context is destroyed, the pointer is set to null. | ### OH_AI_ContextGetEnableParallel() - + ``` OH_AI_API bool OH_AI_ContextGetEnableParallel (const OH_AI_ContextHandle context) ``` -**Description**
+ +**Description** + Checks whether parallelism between operators is supported. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance. | +| Name | Description | +| ------- | ------------------------------------------------------------ | +| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.| **Returns** @@ -488,59 +688,65 @@ Whether parallelism between operators is supported. The value **true** means tha ### OH_AI_ContextGetThreadAffinityCoreList() - + ``` OH_AI_API const int32_t* OH_AI_ContextGetThreadAffinityCoreList (const OH_AI_ContextHandle context, size_t * core_num ) ``` -**Description**
+ +**Description** + Obtains the list of bound CPU cores. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance. | -| core_num | Number of CPU cores. | +| Name | Description | +| -------- | ------------------------------------------------------------ | +| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.| +| core_num | Number of CPU cores. | **Returns** -List of bound CPU cores. +List of bound CPU cores. This list is managed by [OH_AI_ContextHandle](#oh_ai_contexthandle). The caller does not need to destroy it manually. ### OH_AI_ContextGetThreadAffinityMode - + ``` OH_AI_API int OH_AI_ContextGetThreadAffinityMode (const OH_AI_ContextHandle context) ``` -**Description**
+ +**Description** + Obtains the affinity mode for binding runtime threads to CPU cores. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance. | +| Name | Description | +| ------- | ------------------------------------------------------------ | +| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.| **Returns** -Affinity mode. **0**: no affinities; **1**: big cores first; **2**: little cores first +Affinity mode. **0**: no affinities; **1**: big cores first; **2**: medium cores first ### OH_AI_ContextGetThreadNum() - + ``` OH_AI_API int32_t OH_AI_ContextGetThreadNum (const OH_AI_ContextHandle context) ``` -**Description**
+ +**Description** + Obtains the number of threads. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance. | +| Name | Description | +| ------- | ------------------------------------------------------------ | +| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.| **Returns** @@ -549,125 +755,239 @@ Number of threads. ### OH_AI_ContextSetEnableParallel() - + ``` OH_AI_API void OH_AI_ContextSetEnableParallel (OH_AI_ContextHandle context, bool is_parallel ) ``` -**Description**
-Sets whether to enable parallelism between operators. - **Parameters** +**Description** + +Sets whether to enable parallelism between operators. This setting currently has no effect because the feature is not yet available. -| Name | Description | -| -------- | -------- | -| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance. | -| is_parallel | Whether to enable parallelism between operators. The value **true** means to enable parallelism between operators, and the value **false** means the opposite. | +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.| +| is_parallel | Whether to enable parallelism between operators. The value **true** means to enable parallelism between operators, and the value **false** means the opposite. | ### OH_AI_ContextSetThreadAffinityCoreList - + ``` OH_AI_API void OH_AI_ContextSetThreadAffinityCoreList (OH_AI_ContextHandle context, const int32_t * core_list, size_t core_num ) ``` -**Description**
+ +**Description** + Sets the list of CPU cores bound to a runtime thread. For example, if **core_list** is set to **[2,6,8]**, threads run on the 2nd, 6th, and 8th CPU cores. If [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) and [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) are called for the same context object, the **core_list** parameter of [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) takes effect, but the **mode** parameter of [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) does not. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance. | -| core_list | List of bound CPU cores. | -| core_num | Number of cores, which indicates the length of **core_list**. | +| Name | Description | +| --------- | ------------------------------------------------------------ | +| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.| +| core_list | List of bound CPU cores. | +| core_num | Number of cores, which indicates the length of **core_list**. | ### OH_AI_ContextSetThreadAffinityMode() - + ``` OH_AI_API void OH_AI_ContextSetThreadAffinityMode (OH_AI_ContextHandle context, int mode ) ``` -**Description**
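The following minimal sketch makes the **core_list** semantics above concrete. It is illustrative only: `mindspore/context.h` is the assumed NDK header location for these APIs, and the core IDs are placeholders that depend on the target hardware.

```c
#include <mindspore/context.h>   // assumed header location of the context APIs

// Sketch: restrict MindSpore Lite runtime threads to CPU cores 2, 6, and 8.
void ConfigureThreadAffinity(void) {
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    OH_AI_ContextSetThreadNum(context, 2);

    // Threads may run only on these cores (placeholder IDs).
    const int32_t core_list[] = {2, 6, 8};
    OH_AI_ContextSetThreadAffinityCoreList(context, core_list,
                                           sizeof(core_list) / sizeof(core_list[0]));

    // If OH_AI_ContextSetThreadAffinityMode were also called on this context,
    // core_list would still take effect and the mode would be ignored,
    // as noted above.

    // ... attach device info and build a model here ...
    OH_AI_ContextDestroy(&context);
}
```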
-Sets the affinity mode for binding runtime threads to CPU cores, which are categorized into little cores and big cores depending on the CPU frequency. - **Parameters** +**Description** + +Sets the affinity mode for binding runtime threads to CPU cores, which are classified into large, medium, and small cores based on the CPU frequency. Only the large and medium cores need to be bound; small cores do not. + +**Parameters** -| Name | Description | -| -------- | -------- | -| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance. | -| mode | Affinity mode. **0**: no affinities; **1**: big cores first; **2**: little cores first | +| Name | Description | +| ------- | ------------------------------------------------------------ | +| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.| +| mode | Affinity mode. **0**: no affinities; **1**: big cores first; **2**: medium cores first| ### OH_AI_ContextSetThreadNum - + ``` OH_AI_API void OH_AI_ContextSetThreadNum (OH_AI_ContextHandle context, int32_t thread_num ) ``` -**Description**
+ +**Description** + Sets the number of runtime threads. - **Parameters** +**Parameters** + +| Name | Description | +| ---------- | ------------------------------------------------------------ | +| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.| +| thread_num | Number of runtime threads. | + + +### OH_AI_CreateNNRTDeviceInfoByName() + + +``` +OH_AI_API OH_AI_DeviceInfoHandle OH_AI_CreateNNRTDeviceInfoByName (const char * name) +``` + +**Description** + +Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device. + +**Parameters** + +| Name| Description | +| ---- | ---------------- | +| name | Name of the NNRt device.| + +**Returns** + +[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. + +**Since** + +10 + + +### OH_AI_CreateNNRTDeviceInfoByType() + + +``` +OH_AI_API OH_AI_DeviceInfoHandle OH_AI_CreateNNRTDeviceInfoByType (OH_AI_NNRTDeviceType type) +``` + +**Description** + +Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device. + +**Parameters** + +| Name| Description | +| ---- | ------------------------------------------------------------ | +| type | NNRt device type, which is specified by [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype).| + +**Returns** + +[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. + +**Since** + +10 + + +### OH_AI_DestroyAllNNRTDeviceDescs() + + +``` +OH_AI_API void OH_AI_DestroyAllNNRTDeviceDescs (NNRTDeviceDesc ** desc) +``` + +**Description** + +Destroys the array of NNRT descriptions obtained by [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs). + +**Parameters** -| Name | Description | -| -------- | -------- | -| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance. | -| thread_num | Number of runtime threads. | +| Name| Description | +| ---- | ------------------------------------------------------------ | +| desc | Double pointer to the array of the NNRt device descriptions. After the operation is complete, the content pointed to by **desc** is set to **NULL**.| + +**Since** + +10 ### OH_AI_DeviceInfoCreate() - + ``` OH_AI_API OH_AI_DeviceInfoHandle OH_AI_DeviceInfoCreate (OH_AI_DeviceType device_type) ``` -**Description**
+ +**Description** + Creates a device information object. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| device_type | Device type. For details, see [OH_AI_DeviceType](#oh_ai_devicetype). | +| Name | Description | +| ----------- | ------------------------------------------------------- | +| device_type | Device type, which is specified by [OH_AI_DeviceType](#oh_ai_devicetype).| **Returns** -[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to the device information instance. +[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. ### OH_AI_DeviceInfoDestroy() - + ``` OH_AI_API void OH_AI_DeviceInfoDestroy (OH_AI_DeviceInfoHandle * device_info) ``` -**Description**
-Destroys a device information instance. - **Parameters** +**Description** + +Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually. + +**Parameters** -| Name | Description | -| -------- | -------- | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| + + +### OH_AI_DeviceInfoGetDeviceId() + + +``` +OH_AI_API size_t OH_AI_DeviceInfoGetDeviceId (const OH_AI_DeviceInfoHandle device_info) +``` + +**Description** + +Obtains the ID of an NNRt device. This API is available only for NNRt devices. + +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| + +**Returns** + +NNRt device ID. + +**Since** + +10 ### OH_AI_DeviceInfoGetDeviceType() - + ``` OH_AI_API OH_AI_DeviceType OH_AI_DeviceInfoGetDeviceType (const OH_AI_DeviceInfoHandle device_info) ``` -**Description**
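The ownership note for OH_AI_DeviceInfoDestroy is easy to get wrong, so here is a minimal sketch of the intended lifecycle; the header path and the CPU device choice are assumptions for illustration.

```c
#include <stdbool.h>
#include <mindspore/context.h>   // assumed header location

// Sketch: device information ownership across its lifecycle.
void AttachCpuDeviceInfo(void) {
    OH_AI_ContextHandle context = OH_AI_ContextCreate();

    OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    OH_AI_DeviceInfoSetEnableFP16(cpu_info, true);   // float16 inference (CPU/GPU only)
    OH_AI_ContextAddDeviceInfo(context, cpu_info);
    // Do not call OH_AI_DeviceInfoDestroy(&cpu_info) from here on:
    // per the note above, the context now owns the device information instance.

    // Destroying the context is assumed to release the attached device info too.
    OH_AI_ContextDestroy(&context);
}
```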
-Obtains the type of a provider device. - **Parameters** +**Description** + +Obtains the device type. -| Name | Description | -| -------- | -------- | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| **Returns** @@ -676,18 +996,20 @@ Type of the provider device. ### OH_AI_DeviceInfoGetEnableFP16() - + ``` OH_AI_API bool OH_AI_DeviceInfoGetEnableFP16 (const OH_AI_DeviceInfoHandle device_info) ``` -**Description**
-Checks whether float16 inference is enabled. This function is available only for CPU/GPU devices. - **Parameters** +**Description** + +Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices. + +**Parameters** -| Name | Description | -| -------- | -------- | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| **Returns** @@ -696,38 +1018,94 @@ Whether float16 inference is enabled. ### OH_AI_DeviceInfoGetFrequency() - + ``` OH_AI_API int OH_AI_DeviceInfoGetFrequency (const OH_AI_DeviceInfoHandle device_info) ``` -**Description**
-Obtains the NPU frequency type. This function is available only for NPU devices. - **Parameters** +**Description** + +Obtains the NPU frequency type. This API is available only for NPU devices. + +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| + +**Returns** + +NPU frequency type. The value ranges from **0** to **4**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance + + +### OH_AI_DeviceInfoGetPerformanceMode() + + +``` +OH_AI_API OH_AI_PerformanceMode OH_AI_DeviceInfoGetPerformanceMode (const OH_AI_DeviceInfoHandle device_info) +``` + +**Description** + +Obtains the performance mode of an NNRt device. This API is available only for NNRt devices. + +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| + +**Returns** + +NNRt performance mode, which is specified by [OH_AI_PerformanceMode](#oh_ai_performancemode). + +**Since** + +10 + + +### OH_AI_DeviceInfoGetPriority() + + +``` +OH_AI_API OH_AI_Priority OH_AI_DeviceInfoGetPriority (const OH_AI_DeviceInfoHandle device_info) +``` + +**Description** -| Name | Description | -| -------- | -------- | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | +Obtains the priority of an NNRT task. This API is available only for NNRt devices. + +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| **Returns** -Frequency type of the NPU. The value ranges from **0** to **4**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance +NNRt task priority, which is specified by [OH_AI_Priority](#oh_ai_priority). + +**Since** + +10 ### OH_AI_DeviceInfoGetProvider() - + ``` OH_AI_API const char* OH_AI_DeviceInfoGetProvider (const OH_AI_DeviceInfoHandle device_info) ``` -**Description**
+ +**Description** + Obtains the provider name. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| **Returns** @@ -736,150 +1114,339 @@ Provider name. ### OH_AI_DeviceInfoGetProviderDevice() - + ``` OH_AI_API const char* OH_AI_DeviceInfoGetProviderDevice (const OH_AI_DeviceInfoHandle device_info) ``` -**Description**
+ +**Description** + Obtains the name of a provider device. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| **Returns** Name of the provider device. +### OH_AI_DeviceInfoSetDeviceId() + + +``` +OH_AI_API void OH_AI_DeviceInfoSetDeviceId (OH_AI_DeviceInfoHandle device_info, size_t device_id ) +``` + +**Description** + +Sets the ID of an NNRt device. This API is available only for NNRt devices. + +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| +| device_id | NNRt device ID. | + +**Since** + +10 + + ### OH_AI_DeviceInfoSetEnableFP16() - + ``` OH_AI_API void OH_AI_DeviceInfoSetEnableFP16 (OH_AI_DeviceInfoHandle device_info, bool is_fp16 ) ``` -**Description**
-Sets whether to enable float16 inference. This function is available only for CPU/GPU devices. - **Parameters** +**Description** + +Sets whether to enable float16 inference. This function is available only for CPU and GPU devices. + +**Parameters** -| Name | Description | -| -------- | -------- | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | -| is_fp16 | Whether to enable float16 inference. | +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| +| is_fp16 | Whether to enable the float16 inference mode. | ### OH_AI_DeviceInfoSetFrequency() - + ``` OH_AI_API void OH_AI_DeviceInfoSetFrequency (OH_AI_DeviceInfoHandle device_info, int frequency ) ``` -**Description**
+ +**Description** + Sets the NPU frequency type. This function is available only for NPU devices. - **Parameters** +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| +| frequency | NPU frequency type. The value ranges from **0** to **4**. The default value is **3**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance| + + +### OH_AI_DeviceInfoSetPerformanceMode() + + +``` +OH_AI_API void OH_AI_DeviceInfoSetPerformanceMode (OH_AI_DeviceInfoHandle device_info, OH_AI_PerformanceMode mode ) +``` + +**Description** + +Sets the performance mode of an NNRt device. This API is available only for NNRt devices. + +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| +| mode | NNRt performance mode, which is specified by [OH_AI_PerformanceMode](#oh_ai_performancemode).| + +**Since** -| Name | Description | -| -------- | -------- | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | -| frequency | NPU frequency type. The value ranges from **0** to **4**. The default value is **3**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance | +10 + + +### OH_AI_DeviceInfoSetPriority() + + +``` +OH_AI_API void OH_AI_DeviceInfoSetPriority (OH_AI_DeviceInfoHandle device_info, OH_AI_Priority priority ) +``` + +**Description** + +Sets the priority of an NNRt task. This API is available only for NNRt devices. + +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| +| priority | NNRt task priority, which is specified by [OH_AI_Priority](#oh_ai_priority). | + +**Since** + +10 ### OH_AI_DeviceInfoSetProvider() - + ``` OH_AI_API void OH_AI_DeviceInfoSetProvider (OH_AI_DeviceInfoHandle device_info, const char * provider ) ``` -**Description**
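A minimal sketch that combines the NNRt setters above, assuming an accelerator-type NNRt device is present and that OH_AI_CreateNNRTDeviceInfoByType returns a null handle when no device matches; the header paths are likewise assumptions.

```c
#include <stddef.h>
#include <mindspore/context.h>   // assumed header location
#include <mindspore/types.h>     // assumed location of the NNRt type definitions

// Sketch: tune an NNRt device before attaching it to a context.
void AttachTunedNnrtDevice(OH_AI_ContextHandle context) {
    OH_AI_DeviceInfoHandle nnrt_info =
        OH_AI_CreateNNRTDeviceInfoByType(OH_AI_NNRTDEVICE_ACCELERATOR);
    if (nnrt_info == NULL) {
        return;  // no matching NNRt device (assumed failure convention)
    }
    OH_AI_DeviceInfoSetPerformanceMode(nnrt_info, OH_AI_PERFORMANCE_HIGH);
    OH_AI_DeviceInfoSetPriority(nnrt_info, OH_AI_PRIORITY_MEDIUM);
    OH_AI_ContextAddDeviceInfo(context, nnrt_info);  // context takes ownership
}
```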
-Sets the name of a provider. - **Parameters** +**Description** + +Sets the provider name. -| Name | Description | -| -------- | -------- | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | -| provider | Provider name. | +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| +| provider | Provider name. | ### OH_AI_DeviceInfoSetProviderDevice() - + ``` OH_AI_API void OH_AI_DeviceInfoSetProviderDevice (OH_AI_DeviceInfoHandle device_info, const char * device ) ``` -**Description**
+ +**Description** + Sets the name of a provider device. - **Parameters** +**Parameters** + +| Name | Description | +| ----------- | ------------------------------------------------------------ | +| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.| +| device | Name of the provider device, for example, CPU. | + + +### OH_AI_GetAllNNRTDeviceDescs() + + +``` +OH_AI_API NNRTDeviceDesc* OH_AI_GetAllNNRTDeviceDescs (size_t * num) +``` + +**Description** + +Obtains the descriptions of all NNRt devices in the system. + +**Parameters** + +| Name| Description | +| ---- | ------------------------ | +| num | Number of NNRt devices.| + +**Returns** + +Pointer to the array of the NNRt device descriptions. If the operation fails, **NULL** is returned. + +**Since** + +10 + + +### OH_AI_GetDeviceIdFromNNRTDeviceDesc() + + +``` +OH_AI_API size_t OH_AI_GetDeviceIdFromNNRTDeviceDesc (const NNRTDeviceDesc * desc) +``` + +**Description** + +Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices. + +**Parameters** -| Name | Description | -| -------- | -------- | -| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance. | -| device | Name of the provider device, for example, CPU. | +| Name| Description | +| ---- | -------------------------------- | +| desc | Pointer to the NNRt device description.| + +**Returns** + +NNRt device ID. + +**Since** + +10 + + +### OH_AI_GetNameFromNNRTDeviceDesc() + + +``` +OH_AI_API const char* OH_AI_GetNameFromNNRTDeviceDesc (const NNRTDeviceDesc * desc) +``` + +**Description** + +Obtains the NNRt device name from the specified NNRt device description. + +**Parameters** + +| Name| Description | +| ---- | -------------------------------- | +| desc | Pointer to the NNRt device description.| + +**Returns** + +NNRt device name. The value is a pointer that points to a constant string, which is held by **desc**. The caller does not need to destroy it separately. + +**Since** + +10 + + +### OH_AI_GetTypeFromNNRTDeviceDesc() + + +``` +OH_AI_API OH_AI_NNRTDeviceType OH_AI_GetTypeFromNNRTDeviceDesc (const NNRTDeviceDesc * desc) +``` + +**Description** + +Obtains the NNRt device type from the specified NNRt device description. + +**Parameters** + +| Name| Description | +| ---- | -------------------------------- | +| desc | Pointer to the NNRt device description.| + +**Returns** + +NNRt device type, which is specified by [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype). + +**Since** + +10 ### OH_AI_ModelBuild() - + ``` OH_AI_API OH_AI_Status OH_AI_ModelBuild (OH_AI_ModelHandle model, const void * model_data, size_t data_size, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context ) ``` -**Description**
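To show how the NNRt description APIs above fit together, here is a minimal discovery sketch; the header paths are assumed, and only the first element of the returned array is examined because indexing helpers are outside the scope of this file.

```c
#include <stdio.h>
#include <mindspore/context.h>   // assumed header location
#include <mindspore/types.h>     // assumed location of NNRTDeviceDesc

// Sketch: enumerate NNRt devices and create device info from the first one.
void AttachFirstNnrtDevice(OH_AI_ContextHandle context) {
    size_t num = 0;
    NNRTDeviceDesc *descs = OH_AI_GetAllNNRTDeviceDescs(&num);
    if (descs == NULL || num == 0) {
        return;  // no NNRt devices in the system
    }

    // descs points to the first description in the returned array.
    const char *name = OH_AI_GetNameFromNNRTDeviceDesc(descs);
    printf("first NNRt device: %s (id=%zu)\n", name,
           OH_AI_GetDeviceIdFromNNRTDeviceDesc(descs));

    // The name string is owned by descs, so use it before destroying the array.
    OH_AI_DeviceInfoHandle info = OH_AI_CreateNNRTDeviceInfoByName(name);
    OH_AI_DestroyAllNNRTDeviceDescs(&descs);

    if (info != NULL) {
        OH_AI_ContextAddDeviceInfo(context, info);  // context takes ownership
    }
}
```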
+ +**Description** + Loads and builds a MindSpore model from the memory buffer. Note that the same [OH_AI_ContextHandle](#oh_ai_contexthandle) object can only be passed to [OH_AI_ModelBuild](#oh_ai_modelbuild) or [OH_AI_ModelBuildFromFile](#oh_ai_modelbuildfromfile) once. If you call this function multiple times, make sure that you create multiple [OH_AI_ContextHandle](#oh_ai_contexthandle) objects accordingly. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| model | Pointer to the model object. | -| model_data | Address of the loaded model data in the memory. | -| data_size | Length of the model data. | -| model_type | Type of the model file. For details, see [OH_AI_ModelType](#oh_ai_modeltype). | -| model_context | Context for model running. For details, see [OH_AI_ContextHandle](#oh_ai_contexthandle). | +| Name | Description | +| ------------- | ------------------------------------------------------------ | +| model | Pointer to the model object. | +| model_data | Address of the loaded model data in the memory. | +| data_size | Length of the model data. | +| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype). | +| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).| **Returns** -Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **MSStatus::kMSStatusSuccess** indicates that the operation is successful. +Status code enumerated by [OH_AI_Status](#oh_ai_status-1). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. ### OH_AI_ModelBuildFromFile() - + ``` OH_AI_API OH_AI_Status OH_AI_ModelBuildFromFile (OH_AI_ModelHandle model, const char * model_path, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context ) ``` -**Description**
+ +**Description** + Loads and builds a MindSpore model from a model file. Note that the same [OH_AI_ContextHandle](#oh_ai_contexthandle) object can only be passed to [OH_AI_ModelBuild](#oh_ai_modelbuild) or [OH_AI_ModelBuildFromFile](#oh_ai_modelbuildfromfile) once. If you call this function multiple times, make sure that you create multiple [OH_AI_ContextHandle](#oh_ai_contexthandle) objects accordingly. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| model | Pointer to the model object. | -| model_path | Path of the model file. | -| model_type | Type of the model file. For details, see [OH_AI_ModelType](#oh_ai_modeltype). | -| model_context | Context for model running. For details, see [OH_AI_ContextHandle](#oh_ai_contexthandle). | +| Name | Description | +| ------------- | ------------------------------------------------------------ | +| model | Pointer to the model object. | +| model_path | Path of the model file. | +| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype). | +| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).| **Returns** -Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **MSStatus::kMSStatusSuccess** indicates that the operation is successful. +Status code enumerated by [OH_AI_Status](#oh_ai_status-1). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. ### OH_AI_ModelCreate() - + ``` OH_AI_API OH_AI_ModelHandle OH_AI_ModelCreate () ``` -**Description**
+ +**Description** + Creates a model object. **Returns** @@ -889,35 +1456,39 @@ Pointer to the model object. ### OH_AI_ModelDestroy() - + ``` OH_AI_API void OH_AI_ModelDestroy (OH_AI_ModelHandle * model) ``` -**Description**
+ +**Description** + Destroys a model object. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| model | Pointer to the model object. | +| Name | Description | +| ----- | -------------- | +| model | Pointer to the model object.| ### OH_AI_ModelGetInputByTensorName() - + ``` OH_AI_API OH_AI_TensorHandle OH_AI_ModelGetInputByTensorName (const OH_AI_ModelHandle model, const char * tensor_name ) ``` -**Description**
+ +**Description** + Obtains the input tensor of a model by tensor name. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| model | Pointer to the model object. | -| tensor_name | Tensor name. | +| Name | Description | +| ----------- | -------------- | +| model | Pointer to the model object.| +| tensor_name | Tensor name. | **Returns** @@ -926,18 +1497,20 @@ Pointer to the input tensor indicated by **tensor_name**. If the tensor does not ### OH_AI_ModelGetInputs() - + ``` OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetInputs (const OH_AI_ModelHandle model) ``` -**Description**
+ +**Description** + Obtains the input tensor array structure of a model. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| model | Pointer to the model object. | +| Name | Description | +| ----- | -------------- | +| model | Pointer to the model object.| **Returns** @@ -946,39 +1519,43 @@ Tensor array structure corresponding to the model input. ### OH_AI_ModelGetOutputByTensorName() - + ``` OH_AI_API OH_AI_TensorHandle OH_AI_ModelGetOutputByTensorName (const OH_AI_ModelHandle model, const char * tensor_name ) ``` -**Description**
+ +**Description** + Obtains the output tensor of a model by tensor name. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| model | Pointer to the model object. | -| tensor_name | Tensor name. | +| Name | Description | +| ----------- | -------------- | +| model | Pointer to the model object.| +| tensor_name | Tensor name. | **Returns** -Pointer to the output tensor indicated by **tensor_name**. If the tensor does not exist in the input, **null** will be returned. +Pointer to the output tensor indicated by **tensor_name**. If the tensor does not exist, **null** will be returned. ### OH_AI_ModelGetOutputs() - + ``` OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetOutputs (const OH_AI_ModelHandle model) ``` -**Description**
+ +**Description** + Obtains the output tensor array structure of a model. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| model | Pointer to the model object. | +| Name | Description | +| ----- | -------------- | +| model | Pointer to the model object.| **Returns** @@ -987,126 +1564,138 @@ Tensor array structure corresponding to the model output. ### OH_AI_ModelPredict() - + ``` OH_AI_API OH_AI_Status OH_AI_ModelPredict (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray inputs, OH_AI_TensorHandleArray * outputs, const OH_AI_KernelCallBack before, const OH_AI_KernelCallBack after ) ``` -**Description**
+ +**Description** + Performs model inference. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| model | Pointer to the model object. | -| inputs | Tensor array structure corresponding to the model input. | -| outputs | Pointer to the tensor array structure corresponding to the model output. | -| before | Callback function executed before model inference. | -| after | Callback function executed after model inference. | +| Name | Description | +| ------- | ------------------------------------ | +| model | Pointer to the model object. | +| inputs | Tensor array structure corresponding to the model input. | +| outputs | Pointer to the tensor array structure corresponding to the model output.| +| before | Callback function executed before model inference. | +| after | Callback function executed after model inference. | **Returns** -Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **MSStatus::kMSStatusSuccess** indicates that the operation is successful. +Status code enumerated by [OH_AI_Status](#oh_ai_status-1). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. ### OH_AI_ModelResize() - + ``` OH_AI_API OH_AI_Status OH_AI_ModelResize (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray inputs, OH_AI_ShapeInfo * shape_infos, size_t shape_info_num ) ``` -**Description**
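Putting OH_AI_ModelCreate, OH_AI_ModelBuildFromFile, OH_AI_ModelGetInputs, and OH_AI_ModelPredict together, a minimal end-to-end sketch might look as follows. The model path is a placeholder, the header path is assumed, and the **handle_num**/**handle_list** fields of OH_AI_TensorHandleArray come from the C API headers rather than from this reference.

```c
#include <stdio.h>
#include <string.h>
#include <mindspore/model.h>   // assumed header location of the model APIs

// Sketch: build an .ms model and run one inference pass with zeroed inputs.
int RunOnce(OH_AI_ContextHandle context) {
    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    OH_AI_Status ret = OH_AI_ModelBuildFromFile(
        model, "/data/demo.ms", OH_AI_MODELTYPE_MINDIR, context);  // placeholder path
    if (ret != OH_AI_STATUS_SUCCESS) {
        OH_AI_ModelDestroy(&model);
        return -1;
    }

    // Fill every input tensor; zeros stand in for real preprocessed data.
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    for (size_t i = 0; i < inputs.handle_num; ++i) {
        void *data = OH_AI_TensorGetMutableData(inputs.handle_list[i]);
        memset(data, 0, OH_AI_TensorGetDataSize(inputs.handle_list[i]));
    }

    OH_AI_TensorHandleArray outputs;
    ret = OH_AI_ModelPredict(model, inputs, &outputs, NULL, NULL);
    if (ret == OH_AI_STATUS_SUCCESS) {
        const float *out = OH_AI_TensorGetData(outputs.handle_list[0]);  // assumes a float32 output
        printf("first output value: %f\n", out[0]);
    }

    OH_AI_ModelDestroy(&model);
    return ret == OH_AI_STATUS_SUCCESS ? 0 : -1;
}
```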
+ +**Description** + Adjusts the input tensor shapes of a built model. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| model | Pointer to the model object. | -| inputs | Tensor array structure corresponding to the model input. | -| shape_infos | Input shape array, which consists of tensor shapes arranged in the model input sequence. The model adjusts the tensor shapes in sequence. | -| shape_info_num | Length of the input shape array. | +| Name | Description | +| -------------- | ------------------------------------------------------------ | +| model | Pointer to the model object. | +| inputs | Tensor array structure corresponding to the model input. | +| shape_infos | Input shape information array, which consists of tensor shapes arranged in the model input sequence. The model adjusts the tensor shapes in sequence.| +| shape_info_num | Length of the shape information array. | **Returns** -Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **MSStatus::kMSStatusSuccess** indicates that the operation is successful. +Status code enumerated by [OH_AI_Status](#oh_ai_status-1). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. ### OH_AI_TensorClone() - + ``` OH_AI_API OH_AI_TensorHandle OH_AI_TensorClone (OH_AI_TensorHandle tensor) ``` -**Description**
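As a sketch of the resize flow, the snippet below assumes that OH_AI_ShapeInfo carries a **shape_num** field plus a fixed-size **shape** array, as in the C API headers; the NHWC dimensions are illustrative.

```c
#include <mindspore/model.h>   // assumed header location

// Sketch: resize a single-input model to batch size 2 (illustrative NHWC dims).
OH_AI_Status ResizeToBatch2(OH_AI_ModelHandle model) {
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);

    OH_AI_ShapeInfo shape_info;   // layout assumed: { shape_num, shape[] }
    shape_info.shape_num = 4;
    shape_info.shape[0] = 2;      // N: new batch size
    shape_info.shape[1] = 224;    // H
    shape_info.shape[2] = 224;    // W
    shape_info.shape[3] = 3;      // C

    // One entry per model input, in input order; this model has one input.
    return OH_AI_ModelResize(model, inputs, &shape_info, 1);
}
```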
+ +**Description** + Clones a tensor. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Pointer to the tensor to clone. | +| Name | Description | +| ------ | ------------------ | +| tensor | Pointer to the tensor to clone.| **Returns** -Handle of the new tensor object. +Handle of the new tensor object. ### OH_AI_TensorCreate() - + ``` OH_AI_API OH_AI_TensorHandle OH_AI_TensorCreate (const char * name, OH_AI_DataType type, const int64_t * shape, size_t shape_num, const void * data, size_t data_len ) ``` -**Description**
+ +**Description** + Creates a tensor object. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| name | Tensor name. | -| type | Tensor data type. | -| shape | Tensor dimension array. | -| shape_num | Length of the tensor dimension array. | -| data | Data pointer. | -| data_len | Data length. | +| Name | Description | +| --------- | ------------------ | +| name | Tensor name. | +| type | Tensor data type. | +| shape | Tensor dimension array. | +| shape_num | Length of the tensor dimension array.| +| data | Data pointer. | +| data_len | Data length. | **Returns** -Handle of the tensor object. +Handle of the created tensor object. ### OH_AI_TensorDestroy() - + ``` OH_AI_API void OH_AI_TensorDestroy (OH_AI_TensorHandle * tensor) ``` -**Description**
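A minimal sketch of the tensor lifecycle using OH_AI_TensorCreate and the tensor getters documented in this file; OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 is assumed to be the float32 member of OH_AI_DataType (defined outside this file), and the header path is likewise an assumption.

```c
#include <inttypes.h>
#include <stdio.h>
#include <mindspore/tensor.h>   // assumed header location of the tensor APIs

// Sketch: create a 1x3 float32 tensor, inspect it, and destroy it.
void TensorRoundTrip(void) {
    int64_t shape[] = {1, 3};
    float data[] = {0.1f, 0.2f, 0.3f};

    OH_AI_TensorHandle tensor = OH_AI_TensorCreate(
        "demo", OH_AI_DATATYPE_NUMBERTYPE_FLOAT32,  // assumed float32 enumerator
        shape, 2, data, sizeof(data));

    printf("name=%s, elements=%" PRId64 ", bytes=%zu\n",
           OH_AI_TensorGetName(tensor),
           OH_AI_TensorGetElementNum(tensor),
           OH_AI_TensorGetDataSize(tensor));

    OH_AI_TensorDestroy(&tensor);
}
```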
+ +**Description** + Destroys a tensor object. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Level-2 pointer to the tensor handle. | +| Name | Description | +| ------ | ------------------------ | +| tensor | Level-2 pointer to the tensor handle.| ### OH_AI_TensorGetData() - + ``` OH_AI_API const void* OH_AI_TensorGetData (const OH_AI_TensorHandle tensor) ``` -**Description**
+ +**Description** + Obtains the pointer to tensor data. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | +| Name | Description | +| ------ | -------------- | +| tensor | Handle of the tensor object.| **Returns** @@ -1115,18 +1704,20 @@ Pointer to tensor data. ### OH_AI_TensorGetDataSize() - + ``` OH_AI_API size_t OH_AI_TensorGetDataSize (const OH_AI_TensorHandle tensor) ``` -**Description**
+ +**Description** + Obtains the number of bytes of the tensor data. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | +| Name | Description | +| ------ | -------------- | +| tensor | Handle of the tensor object.| **Returns** @@ -1135,38 +1726,42 @@ Number of bytes of the tensor data. ### OH_AI_TensorGetDataType() - + ``` OH_AI_API OH_AI_DataType OH_AI_TensorGetDataType (const OH_AI_TensorHandle tensor) ``` -**Description**
-Obtains the data type of a tensor. - **Parameters** +**Description** + +Obtains the tensor data type. -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | +**Parameters** + +| Name | Description | +| ------ | -------------- | +| tensor | Handle of the tensor object.| **Returns** -Data type of the tensor. +Tensor data type. ### OH_AI_TensorGetElementNum() - + ``` OH_AI_API int64_t OH_AI_TensorGetElementNum (const OH_AI_TensorHandle tensor) ``` -**Description**
+ +**Description** + Obtains the number of tensor elements. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | +| Name | Description | +| ------ | -------------- | +| tensor | Handle of the tensor object.| **Returns** @@ -1175,18 +1770,20 @@ Number of tensor elements. ### OH_AI_TensorGetFormat() - + ``` OH_AI_API OH_AI_Format OH_AI_TensorGetFormat (const OH_AI_TensorHandle tensor) ``` -**Description**
+ +**Description** + Obtains the tensor data format. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | +| Name | Description | +| ------ | -------------- | +| tensor | Handle of the tensor object.| **Returns** @@ -1195,38 +1792,42 @@ Tensor data format. ### OH_AI_TensorGetMutableData() - + ``` OH_AI_API void* OH_AI_TensorGetMutableData (const OH_AI_TensorHandle tensor) ``` -**Description**
+ +**Description** + Obtains the pointer to variable tensor data. If the data is empty, memory will be allocated. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | +| Name | Description | +| ------ | -------------- | +| tensor | Handle of the tensor object.| **Returns** -Pointer to variable tensor data. +Pointer to tensor data. ### OH_AI_TensorGetName() - + ``` OH_AI_API const char* OH_AI_TensorGetName (const OH_AI_TensorHandle tensor) ``` -**Description**
+ +**Description** + Obtains the name of a tensor. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | +| Name | Description | +| ------ | -------------- | +| tensor | Handle of the tensor object.| **Returns** @@ -1235,106 +1836,118 @@ Tensor name. ### OH_AI_TensorGetShape() - + ``` OH_AI_API const int64_t* OH_AI_TensorGetShape (const OH_AI_TensorHandle tensor, size_t * shape_num ) ``` -**Description**
-Obtains the shape of a tensor. - **Parameters** +**Description** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | -| shape_num | Length of the tensor shape array. | +Obtains the tensor shape. + +**Parameters** + +| Name | Description | +| --------- | ---------------------------------------------- | +| tensor | Handle of the tensor object. | +| shape_num | Length of the tensor shape array.| **Returns** -Tensor shape array. +Shape array. ### OH_AI_TensorSetData() - + ``` OH_AI_API void OH_AI_TensorSetData (OH_AI_TensorHandle tensor, void * data ) ``` -**Description**
+ +**Description** + Sets the tensor data. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | -| data | Data pointer. | +| Name | Description | +| ------ | ---------------- | +| tensor | Handle of the tensor object. | +| data | Data pointer.| ### OH_AI_TensorSetDataType() - + ``` OH_AI_API void OH_AI_TensorSetDataType (OH_AI_TensorHandle tensor, OH_AI_DataType type ) ``` -**Description**
-Sets the data type of a tensor. - **Parameters** +**Description** + +Sets the tensor data type. -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | -| type | Data type. For details, see [OH_AI_DataType](#oh_ai_datatype). | +**Parameters** + +| Name | Description | +| ------ | --------------------------------------------------- | +| tensor | Handle of the tensor object. | +| type | Data type, which is specified by [OH_AI_DataType](#oh_ai_datatype).| ### OH_AI_TensorSetFormat() - + ``` OH_AI_API void OH_AI_TensorSetFormat (OH_AI_TensorHandle tensor, OH_AI_Format format ) ``` -**Description**
+ +**Description** + Sets the tensor data format. - **Parameters** +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | -| format | Tensor data format. | +| Name | Description | +| ------ | ------------------ | +| tensor | Handle of the tensor object. | +| format | Tensor data format.| ### OH_AI_TensorSetName() - + ``` -OH_AI_API void OH_AI_TensorSetName (OH_AI_TensorHandle tensor, const char * name ) +OH_AI_API void OH_AI_TensorSetName (OH_AI_TensorHandle tensor, const char *name ) ``` -**Description**
-Sets the name of a tensor. - **Parameters** +**Description** + +Sets the tensor name. + +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | -| name | Tensor name. | +| Name | Description | +| ------ | -------------- | +| tensor | Handle of the tensor object.| +| name | Tensor name. | ### OH_AI_TensorSetShape() - + ``` OH_AI_API void OH_AI_TensorSetShape (OH_AI_TensorHandle tensor, const int64_t * shape, size_t shape_num ) ``` -**Description**
-Sets the shape of a tensor. - **Parameters** +**Description** + +Sets the tensor shape. + +**Parameters** -| Name | Description | -| -------- | -------- | -| tensor | Handle of the tensor object. | -| shape | Tensor shape array. | -| shape_num | Length of the tensor shape array. | +| Name | Description | +| --------- | ------------------ | +| tensor | Handle of the tensor object. | +| shape | Shape array. | +| shape_num | Length of the tensor shape array.| diff --git a/en/application-dev/reference/native-apis/context_8h.md b/en/application-dev/reference/native-apis/context_8h.md index f73e70d24871b35a026b39befa08fbebeee5380a..743fdbc9dbfe3d3d4b3b464b0301999631980790 100644 --- a/en/application-dev/reference/native-apis/context_8h.md +++ b/en/application-dev/reference/native-apis/context_8h.md @@ -5,10 +5,11 @@ Provides **Context** APIs for configuring runtime information. -**Since:** +**Since** + 9 -**Related Modules:** +**Related Modules** [MindSpore](_mind_spore.md) @@ -18,35 +19,48 @@ Provides **Context** APIs for configuring runtime information. ### Types -| Name | Description | +| Name| Description| | -------- | -------- | -| [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) | Defines the pointer to the MindSpore context. | -| [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information. | +| [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) | Defines the pointer to the MindSpore context. | +| [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information.| ### Functions -| Name | Description | +| Name| Description| | -------- | -------- | -| [OH_AI_ContextCreate](_mind_spore.md#oh_ai_contextcreate) () | Creates a context object. | -| [OH_AI_ContextDestroy](_mind_spore.md#oh_ai_contextdestroy) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) \*context) | Destroys a context object. | -| [OH_AI_ContextSetThreadNum](_mind_spore.md#oh_ai_contextsetthreadnum) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads. | -| [OH_AI_ContextGetThreadNum](_mind_spore.md#oh_ai_contextgetthreadnum) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Obtains the number of threads. | -| [OH_AI_ContextSetThreadAffinityMode](_mind_spore.md#oh_ai_contextsetthreadaffinitymode) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are categorized into little cores and big cores depending on the CPU frequency. | -| [OH_AI_ContextGetThreadAffinityMode](_mind_spore.md#oh_ai_contextgetthreadaffinitymode) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores. | -| [OH_AI_ContextSetThreadAffinityCoreList](_mind_spore.md#oh_ai_contextsetthreadaffinitycorelist) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread. | -| [OH_AI_ContextGetThreadAffinityCoreList](_mind_spore.md#oh_ai_contextgetthreadaffinitycorelist) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores. 
| -| [OH_AI_ContextSetEnableParallel](_mind_spore.md#oh_ai_contextsetenableparallel) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. | -| [OH_AI_ContextGetEnableParallel](_mind_spore.md#oh_ai_contextgetenableparallel) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported. | -| [OH_AI_ContextAddDeviceInfo](_mind_spore.md#oh_ai_contextadddeviceinfo) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Adds information about a running device. | -| [OH_AI_DeviceInfoCreate](_mind_spore.md#oh_ai_deviceinfocreate) ([OH_AI_DeviceType](_mind_spore.md#oh_ai_devicetype) device_type) | Creates a device information object. | -| [OH_AI_DeviceInfoDestroy](_mind_spore.md#oh_ai_deviceinfodestroy) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) \*device_info) | Destroys a device information instance. | -| [OH_AI_DeviceInfoSetProvider](_mind_spore.md#oh_ai_deviceinfosetprovider) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the name of a provider. | -| [OH_AI_DeviceInfoGetProvider](_mind_spore.md#oh_ai_deviceinfogetprovider) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the provider name. | -| [OH_AI_DeviceInfoSetProviderDevice](_mind_spore.md#oh_ai_deviceinfosetproviderdevice) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device. | -| [OH_AI_DeviceInfoGetProviderDevice](_mind_spore.md#oh_ai_deviceinfogetproviderdevice) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device. | -| [OH_AI_DeviceInfoGetDeviceType](_mind_spore.md#oh_ai_deviceinfogetdevicetype) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the type of a provider device. | -| [OH_AI_DeviceInfoSetEnableFP16](_mind_spore.md#oh_ai_deviceinfosetenablefp16) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU/GPU devices. | -| [OH_AI_DeviceInfoGetEnableFP16](_mind_spore.md#oh_ai_deviceinfogetenablefp16) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU/GPU devices. | -| [OH_AI_DeviceInfoSetFrequency](_mind_spore.md#oh_ai_deviceinfosetfrequency) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices. | -| [OH_AI_DeviceInfoGetFrequency](_mind_spore.md#oh_ai_deviceinfogetfrequency) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices. 
| +| [OH_AI_ContextCreate](_mind_spore.md#oh_ai_contextcreate) () | Creates a context object.| +| [OH_AI_ContextDestroy](_mind_spore.md#oh_ai_contextdestroy) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) \*context) | Destroys a context object.| +| [OH_AI_ContextSetThreadNum](_mind_spore.md#oh_ai_contextsetthreadnum) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads.| +| [OH_AI_ContextGetThreadNum](_mind_spore.md#oh_ai_contextgetthreadnum) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Obtains the number of threads.| +| [OH_AI_ContextSetThreadAffinityMode](_mind_spore.md#oh_ai_contextsetthreadaffinitymode) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are classified into large, medium, and small cores based on the CPU frequency. Only the large and medium cores need to be bound; small cores do not.| +| [OH_AI_ContextGetThreadAffinityMode](_mind_spore.md#oh_ai_contextgetthreadaffinitymode) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores.| +| [OH_AI_ContextSetThreadAffinityCoreList](_mind_spore.md#oh_ai_contextsetthreadaffinitycorelist) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread.| +| [OH_AI_ContextGetThreadAffinityCoreList](_mind_spore.md#oh_ai_contextgetthreadaffinitycorelist) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores.| +| [OH_AI_ContextSetEnableParallel](_mind_spore.md#oh_ai_contextsetenableparallel) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. This setting currently has no effect because the feature is not yet available.| +| [OH_AI_ContextGetEnableParallel](_mind_spore.md#oh_ai_contextgetenableparallel) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported.| +| [OH_AI_ContextAddDeviceInfo](_mind_spore.md#oh_ai_contextadddeviceinfo) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Attaches the custom device information to the inference context.| +| [OH_AI_DeviceInfoCreate](_mind_spore.md#oh_ai_deviceinfocreate) ([OH_AI_DeviceType](_mind_spore.md#oh_ai_devicetype) device_type) | Creates a device information object.| +| [OH_AI_DeviceInfoDestroy](_mind_spore.md#oh_ai_deviceinfodestroy) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) \*device_info) | Destroys a device information object. 
Note: After the device information instance is added to the context, the caller does not need to destroy it manually.| +| [OH_AI_DeviceInfoSetProvider](_mind_spore.md#oh_ai_deviceinfosetprovider) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the provider name.| +| [OH_AI_DeviceInfoGetProvider](_mind_spore.md#oh_ai_deviceinfogetprovider) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the provider name.| +| [OH_AI_DeviceInfoSetProviderDevice](_mind_spore.md#oh_ai_deviceinfosetproviderdevice) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device.| +| [OH_AI_DeviceInfoGetProviderDevice](_mind_spore.md#oh_ai_deviceinfogetproviderdevice) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device.| +| [OH_AI_DeviceInfoGetDeviceType](_mind_spore.md#oh_ai_deviceinfogetdevicetype) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the device type.| +| [OH_AI_DeviceInfoSetEnableFP16](_mind_spore.md#oh_ai_deviceinfosetenablefp16) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU and GPU devices.| +| [OH_AI_DeviceInfoGetEnableFP16](_mind_spore.md#oh_ai_deviceinfogetenablefp16) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices.| +| [OH_AI_DeviceInfoSetFrequency](_mind_spore.md#oh_ai_deviceinfosetfrequency) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices.| +| [OH_AI_DeviceInfoGetFrequency](_mind_spore.md#oh_ai_deviceinfogetfrequency) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices.| +| [OH_AI_GetAllNNRTDeviceDescs](_mind_spore.md#oh_ai_getallnnrtdevicedescs) (size_t \*num) | Obtains the descriptions of all NNRt devices in the system.| +| [OH_AI_DestroyAllNNRTDeviceDescs](_mind_spore.md#oh_ai_destroyallnnrtdevicedescs) ([NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*\*desc) | Destroys the array of NNRT descriptions obtained by [OH_AI_GetAllNNRTDeviceDescs](_mind_spore.md#oh_ai_getallnnrtdevicedescs).| +| [OH_AI_GetDeviceIdFromNNRTDeviceDesc](_mind_spore.md#oh_ai_getdeviceidfromnnrtdevicedesc) (const [NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device ID from the specified NNRt device description. 
Note that this ID is valid only for NNRt devices.| +| [OH_AI_GetNameFromNNRTDeviceDesc](_mind_spore.md#oh_ai_getnamefromnnrtdevicedesc) (const [NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device name from the specified NNRt device description.| +| [OH_AI_GetTypeFromNNRTDeviceDesc](_mind_spore.md#oh_ai_gettypefromnnrtdevicedesc) (const [NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device type from the specified NNRt device description.| +| [OH_AI_CreateNNRTDeviceInfoByName](_mind_spore.md#oh_ai_creatennrtdeviceinfobyname) (const char \*name) | Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.| +| [OH_AI_CreateNNRTDeviceInfoByType](_mind_spore.md#oh_ai_creatennrtdeviceinfobytype) ([OH_AI_NNRTDeviceType](_mind_spore.md#oh_ai_nnrtdevicetype) type) | Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.| +| [OH_AI_DeviceInfoSetDeviceId](_mind_spore.md#oh_ai_deviceinfosetdeviceid) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, size_t device_id) | Sets the ID of an NNRt device. This API is available only for NNRt devices.| +| [OH_AI_DeviceInfoGetDeviceId](_mind_spore.md#oh_ai_deviceinfogetdeviceid) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the ID of an NNRt device. This API is available only for NNRt devices.| +| [OH_AI_DeviceInfoSetPerformanceMode](_mind_spore.md#oh_ai_deviceinfosetperformancemode) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, [OH_AI_PerformanceMode](_mind_spore.md#oh_ai_performancemode) mode) | Sets the performance mode of an NNRt device. This API is available only for NNRt devices.| +| [OH_AI_DeviceInfoGetPerformanceMode](_mind_spore.md#oh_ai_deviceinfogetperformancemode) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the performance mode of an NNRt device. This API is available only for NNRt devices.| +| [OH_AI_DeviceInfoSetPriority](_mind_spore.md#oh_ai_deviceinfosetpriority) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, [OH_AI_Priority](_mind_spore.md#oh_ai_priority) priority) | Sets the priority of an NNRT task. This API is available only for NNRt devices.| +| [OH_AI_DeviceInfoGetPriority](_mind_spore.md#oh_ai_deviceinfogetpriority) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the priority of an NNRT task. This API is available only for NNRt devices.|
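+
+The following is a minimal sketch of how these APIs fit together: it enumerates the NNRt devices in the system, prefers the first NNRt accelerator for inference, and adds a CPU fallback with float16 inference enabled. The header paths and the enum values used here (`OH_AI_NNRTDEVICE_ACCELERATOR`, `OH_AI_PRIORITY_HIGH`, `OH_AI_DEVICETYPE_CPU`) are assumptions based on the OpenHarmony NDK layout rather than part of the table above; verify them against your SDK.
+
+```c
+// Minimal usage sketch; headers and enum values are assumed, see note above.
+#include <stdio.h>
+#include <stdbool.h>
+#include <mindspore/types.h>     // assumed location of NNRTDeviceDesc and the enums
+#include <mindspore/context.h>   // assumed location of the OH_AI_Context* APIs
+
+int main(void) {
+    // Enumerate the NNRt devices present in the system and print the first one.
+    size_t num = 0;
+    NNRTDeviceDesc *descs = OH_AI_GetAllNNRTDeviceDescs(&num);
+    if (descs != NULL && num > 0) {
+        printf("first NNRt device: %s (id=%zu)\n",
+               OH_AI_GetNameFromNNRTDeviceDesc(descs),
+               OH_AI_GetDeviceIdFromNNRTDeviceDesc(descs));
+        OH_AI_DestroyAllNNRTDeviceDescs(&descs);  // frees only the description array
+    }
+
+    OH_AI_ContextHandle ctx = OH_AI_ContextCreate();
+    OH_AI_ContextSetThreadNum(ctx, 2);
+
+    // Prefer the first NNRt accelerator found and give its tasks a high priority.
+    OH_AI_DeviceInfoHandle nnrt = OH_AI_CreateNNRTDeviceInfoByType(OH_AI_NNRTDEVICE_ACCELERATOR);
+    if (nnrt != NULL) {
+        OH_AI_DeviceInfoSetPriority(nnrt, OH_AI_PRIORITY_HIGH);
+        OH_AI_ContextAddDeviceInfo(ctx, nnrt);  // the context owns the device info from here on
+    }
+
+    // CPU fallback; float16 inference is available only for CPU and GPU devices.
+    OH_AI_DeviceInfoHandle cpu = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
+    OH_AI_DeviceInfoSetEnableFP16(cpu, true);
+    OH_AI_ContextAddDeviceInfo(ctx, cpu);
+
+    // ... build a model with ctx and run inference ...
+
+    OH_AI_ContextDestroy(&ctx);
+    return 0;
+}
+```
+
+Note that the sketch never calls OH_AI_DeviceInfoDestroy: as stated in the table, device information added to a context does not need to be destroyed manually.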