Unverified commit bd4c2724 authored by openharmony_ci, committed by Gitee

!20227 Translation completed: 18966+19018+18882

Merge pull request !20227 from shawn_he/18966-d
......@@ -38,7 +38,7 @@ Log level.
| Name | Value | Description |
| ----- | ------ | ------------------------------------------------------------ |
| DEBUG | 3 | Log level used to record more detailed process information than INFO logs to help developers analyze service processes and locate faults.|
| INFO | 4 | Log level used to record key service process nodes and exceptions that occur during service running,<br>Log level used to record information about unexpected exceptions, such as network signal loss and login failure.<br>These logs should be recorded by the dominant module in the service to avoid repeated logging conducted by multiple invoked modules or low-level functions.|
| INFO | 4 | Log level used to record key service process nodes and exceptions that occur during service running as well as information about unexpected exceptions, such as network signal loss and login failure.<br>These logs should be recorded by the dominant module in the service to avoid repeated logging conducted by multiple invoked modules or low-level functions.|
| WARN | 5 | Log level used to record severe, unexpected faults that have little impact on users and can be rectified by the programs themselves or through simple operations.|
| ERROR | 6 | Log level used to record program or functional errors that affect the normal running or use of the functionality and can be fixed at a high cost, for example, by resetting data.|
| FATAL | 7 | Log level used to record program or functionality crashes that cannot be rectified.|
......
......@@ -5,5 +5,6 @@
- [Raw File Development](rawfile-guidelines.md)
- [Native Window Development](native-window-guidelines.md)
- [Using MindSpore Lite for Model Inference](mindspore-lite-guidelines.md)
- [Using MindSpore Lite for Offline Model Conversion and Inference](mindspore-lite-offline-model-guidelines.md)
- [Connecting the Neural Network Runtime to an AI Inference Framework](neural-network-runtime-guidelines.md)
- [Purgeable Memory Development](purgeable-memory-guidelines.md)
# Using MindSpore Lite for Offline Model Conversion and Inference
## Basic Concepts
- MindSpore Lite: a built-in AI inference engine of OpenHarmony that provides inference deployment for deep learning models.
- Neural Network Runtime (NNRt): a bridge that connects the upper-layer AI inference framework to the bottom-layer acceleration chip to implement cross-chip inference and computing of AI models.
- Offline model: a model obtained using the offline model conversion tool of the AI hardware vendor. The hardware vendor is responsible for parsing and inference of AI models.
## When to Use
The common process for MindSpore Lite AI model deployment is as follows:
- Use the MindSpore Lite model conversion tool to convert third-party models (such as ONNX and Caffe) to `.ms` models.
- Call APIs of the MindSpore Lite inference engine to perform model inference. By specifying NNRt as the inference device, you can then use the AI hardware in the system to accelerate inference.
When MindSpore Lite + NNRt inference is used, online graph construction in the initial phase introduces a certain model loading delay.
If you need to reduce the loading delay to meet the requirements of your deployment scenario, you can use offline model-based inference as an alternative. The procedure is as follows:
- Use the offline model conversion tool provided by the AI hardware vendor to compile an offline model in advance.
- Use the MindSpore Lite conversion tool to encapsulate the offline model as a black box into the `.ms` model.
- Pass the `.ms` model to MindSpore Lite for inference.
During inference, MindSpore Lite directly sends the offline model to the AI hardware connected to NNRt. This way, the model can be loaded without online graph construction, which greatly reduces the model loading delay. In addition, MindSpore Lite can provide additional hardware-specific information to assist the AI hardware in model inference.
The following sections describe the offline model inference and conversion process in detail.
## Constraints
- Offline model inference can be implemented only at the NNRt backend. The AI hardware must be connected to NNRt and support offline model inference.
## Offline Model Conversion
### 1. Building the MindSpore Lite Release Package
Obtain the [MindSpore Lite source code](https://gitee.com/openharmony/third_party_mindspore). The source code is managed in "compressed package + patch" mode. Run the following commands to decompress the source code package and install the patch:
```bash
cd mindspore
python3 build_helper.py --in_zip_path=./mindspore-v1.8.1.zip --patch_dir=./patches/ --out_src_path=./mindspore-src
```
If the command execution is successful, the complete MindSpore Lite source code is generated in `mindspore-src/source/`.
Run the following commands to start building:
```bash
cd mindspore-src/source/
bash build.sh -I x86_64 -j 8
```
After the building is complete, you can obtain the MindSpore Lite release package from the `output/` directory in the root directory of the source code.
### 2. Writing Extended Configuration File of the Conversion Tool
The offline model comes as a black box and cannot be parsed by the conversion tool to obtain its input and output tensor information. Therefore, you need to manually configure the tensor information in the extended configuration file of the conversion tool. Based on the extended configuration, the conversion tool can then generate the `.ms` model file for encapsulating the offline model.
An example of the extended configuration is as follows:
- `[third_party_model]` in the first line is a fixed keyword that indicates the section of offline model configuration.
- The following lines specify the name, data type, shape, and memory format of the model's input and output tensors. Each field occupies one line and is expressed in the key-value pair format. The sequence of fields is not limited.
- Among the fields, data type and shape are mandatory, and other parameters are optional.
- Extended parameters are also provided. They are used to encapsulate custom configuration of the offline model into the `.ms` file in the key-value pair format. The `.ms` file is passed to the AI hardware by NNRt during inference.
```text
[third_party_model]
input_names=in_0;in_1
input_dtypes=float32;float32
input_shapes=8,256,256;8,256,256,3
input_formats=NCHW;NCHW
output_names=out_0
output_dtypes=float32
output_shapes=8,64
output_formats=NCHW
extended_parameters=key_foo:value_foo;key_bar:value_bar
```
The related fields are described as follows:
- `input_names` (optional): model input name, which is in the string format. If multiple names are specified, use a semicolon (`;`) to separate them.
- `input_dtypes` (mandatory): model input data type, which is in the type format. If multiple data types are specified, use a semicolon (`;`) to separate them.
- `input_shapes` (mandatory): model input shape, which is in the integer array format. If multiple input shapes are specified, use a semicolon (`;`) to separate them.
- `input_formats` (optional): model input memory format, which is in the string format. If multiple formats are specified, use a semicolon (`;`) to separate them. The default value is `NHWC`.
- `output_names` (optional): model output name, which is in the string format. If multiple names are specified, use a semicolon (`;`) to separate them.
- `output_dtypes` (mandatory): model output data type, which is in the type format. If multiple data types are specified, use a semicolon (`;`) to separate them.
- `output_shapes` (mandatory): model output shape, which is in the integer array format. If multiple output shapes are specified, use a semicolon (`;`) to separate them.
- `output_formats` (optional): model output memory format, which is in the string format. If multiple formats are specified, use a semicolon (`;`) to separate them. The default value is `NHWC`.
- `extended_parameters` (optional): custom configuration of the inference hardware, which is in the key-value pair format. It is passed to the AI hardware through the NNRt backend during inference.
### 3. Converting an Offline Model
Decompress the MindSpore Lite release package obtained in step 1. Go to the directory where the conversion tool is located (that is, `tools/converter/converter/`), and run the following commands:
```bash
export LD_LIBRARY_PATH=${PWD}/../lib
./converter_lite --fmk=THIRDPARTY --modelFile=/path/to/your_model --configFile=/path/to/your_config --outputFile=/path/to/output_model
```
The offline model conversion is complete.
The related parameters are described as follows:
- `--fmk`: original format of the input model. `THIRDPARTY` indicates an offline model.
- `--modelFile`: path of the input model.
- `--configFile`: path of the extended configuration file. The file is used to configure offline model information.
- `--outputFile`: path of the output model. You do not need to add the file name extension. The `.ms` suffix is generated automatically.
> **NOTE**
>
> If `fmk` is set to `THIRDPARTY`, offline model conversion is performed. In this case, only the preceding four parameters and the extended configuration file take effect.
## Offline Model Inference
Offline model inference is the same as common MindSpore Lite model inference except that only NNRt devices can be added to the inference context.
For details about the MindSpore Lite model inference process, see [Using MindSpore Lite for Model Inference](./mindspore-lite-guidelines.md).
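The only difference from the common flow is the device information added to the inference context. The following is a minimal sketch in C, assuming the standard OpenHarmony MindSpore Lite NDK headers and a hypothetical model path `/path/to/model.ms`; error handling and input-tensor filling are abbreviated.
```c
#include <stdio.h>
#include "mindspore/context.h"
#include "mindspore/model.h"
#include "mindspore/status.h"
#include "mindspore/types.h"

int main(void) {
    // Create the inference context and attach only an NNRt device to it.
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    OH_AI_DeviceInfoHandle nnrt_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_NNRT);
    OH_AI_ContextAddDeviceInfo(context, nnrt_info);

    // Load and build the .ms model that encapsulates the offline model.
    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    OH_AI_Status ret = OH_AI_ModelBuildFromFile(model, "/path/to/model.ms",
                                                OH_AI_MODELTYPE_MINDIR, context);
    if (ret != OH_AI_STATUS_SUCCESS) {
        printf("Build model failed: %d\n", ret);
        OH_AI_ModelDestroy(&model);
        return -1;
    }

    // Fill the input tensors (omitted here), then run inference.
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    OH_AI_TensorHandleArray outputs;
    ret = OH_AI_ModelPredict(model, inputs, &outputs, NULL, NULL);
    if (ret != OH_AI_STATUS_SUCCESS) {
        printf("Predict failed: %d\n", ret);
    }

    OH_AI_ModelDestroy(&model);
    return 0;
}
```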
......@@ -1766,7 +1766,7 @@ promise.then(() => {
setExtraOptions(options: TCPExtraOptions, callback: AsyncCallback\<void\>): void
Sets other properties of the TCPSocket connection after successful binding of the local IP address and port number of the TLSSocket connection. This API uses an asynchronous callback to return the result.
Sets other properties of the TCPSocket connection after successful binding of the local IP address and port number of the connection. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.Communication.NetStack
......@@ -1818,7 +1818,7 @@ tls.setExtraOptions({
setExtraOptions(options: TCPExtraOptions): Promise\<void\>
Sets other properties of the TCPSocket connection after successful binding of the local IP address and port number of the TLSSocket connection. This API uses a promise to return the result.
Sets other properties of the TCPSocket connection after successful binding of the local IP address and port number of the connection. This API uses a promise to return the result.
**System capability**: SystemCapability.Communication.NetStack
......
......@@ -5,10 +5,11 @@
Provides **Context** APIs for configuring runtime information.
**Since:**
**Since**
9
**Related Modules:**
**Related Modules**
[MindSpore](_mind_spore.md)
......@@ -18,35 +19,48 @@ Provides **Context** APIs for configuring runtime information.
### Types
| Name | Description |
| Name| Description|
| -------- | -------- |
| [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) | Defines the pointer to the MindSpore context. |
| [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information. |
| [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) | Defines the pointer to the MindSpore context. |
| [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information.|
### Functions
| Name | Description |
| Name| Description|
| -------- | -------- |
| [OH_AI_ContextCreate](_mind_spore.md#oh_ai_contextcreate) () | Creates a context object. |
| [OH_AI_ContextDestroy](_mind_spore.md#oh_ai_contextdestroy) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) \*context) | Destroys a context object. |
| [OH_AI_ContextSetThreadNum](_mind_spore.md#oh_ai_contextsetthreadnum) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads. |
| [OH_AI_ContextGetThreadNum](_mind_spore.md#oh_ai_contextgetthreadnum) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Obtains the number of threads. |
| [OH_AI_ContextSetThreadAffinityMode](_mind_spore.md#oh_ai_contextsetthreadaffinitymode) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are categorized into little cores and big cores depending on the CPU frequency. |
| [OH_AI_ContextGetThreadAffinityMode](_mind_spore.md#oh_ai_contextgetthreadaffinitymode) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores. |
| [OH_AI_ContextSetThreadAffinityCoreList](_mind_spore.md#oh_ai_contextsetthreadaffinitycorelist) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread. |
| [OH_AI_ContextGetThreadAffinityCoreList](_mind_spore.md#oh_ai_contextgetthreadaffinitycorelist) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores. |
| [OH_AI_ContextSetEnableParallel](_mind_spore.md#oh_ai_contextsetenableparallel) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. |
| [OH_AI_ContextGetEnableParallel](_mind_spore.md#oh_ai_contextgetenableparallel) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported. |
| [OH_AI_ContextAddDeviceInfo](_mind_spore.md#oh_ai_contextadddeviceinfo) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Adds information about a running device. |
| [OH_AI_DeviceInfoCreate](_mind_spore.md#oh_ai_deviceinfocreate) ([OH_AI_DeviceType](_mind_spore.md#oh_ai_devicetype) device_type) | Creates a device information object. |
| [OH_AI_DeviceInfoDestroy](_mind_spore.md#oh_ai_deviceinfodestroy) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) \*device_info) | Destroys a device information instance. |
| [OH_AI_DeviceInfoSetProvider](_mind_spore.md#oh_ai_deviceinfosetprovider) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the name of a provider. |
| [OH_AI_DeviceInfoGetProvider](_mind_spore.md#oh_ai_deviceinfogetprovider) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the provider name. |
| [OH_AI_DeviceInfoSetProviderDevice](_mind_spore.md#oh_ai_deviceinfosetproviderdevice) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device. |
| [OH_AI_DeviceInfoGetProviderDevice](_mind_spore.md#oh_ai_deviceinfogetproviderdevice) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device. |
| [OH_AI_DeviceInfoGetDeviceType](_mind_spore.md#oh_ai_deviceinfogetdevicetype) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the type of a provider device. |
| [OH_AI_DeviceInfoSetEnableFP16](_mind_spore.md#oh_ai_deviceinfosetenablefp16) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU/GPU devices. |
| [OH_AI_DeviceInfoGetEnableFP16](_mind_spore.md#oh_ai_deviceinfogetenablefp16) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU/GPU devices. |
| [OH_AI_DeviceInfoSetFrequency](_mind_spore.md#oh_ai_deviceinfosetfrequency) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices. |
| [OH_AI_DeviceInfoGetFrequency](_mind_spore.md#oh_ai_deviceinfogetfrequency) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices. |
| [OH_AI_ContextCreate](_mind_spore.md#oh_ai_contextcreate) () | Creates a context object.|
| [OH_AI_ContextDestroy](_mind_spore.md#oh_ai_contextdestroy) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) \*context) | Destroys a context object.|
| [OH_AI_ContextSetThreadNum](_mind_spore.md#oh_ai_contextsetthreadnum) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads.|
| [OH_AI_ContextGetThreadNum](_mind_spore.md#oh_ai_contextgetthreadnum) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Obtains the number of threads.|
| [OH_AI_ContextSetThreadAffinityMode](_mind_spore.md#oh_ai_contextsetthreadaffinitymode) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are classified into large, medium, and small cores based on the CPU frequency. You only need to bind the large or medium cores, but not small cores.|
| [OH_AI_ContextGetThreadAffinityMode](_mind_spore.md#oh_ai_contextgetthreadaffinitymode) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores.|
| [OH_AI_ContextSetThreadAffinityCoreList](_mind_spore.md#oh_ai_contextsetthreadaffinitycorelist) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread.|
| [OH_AI_ContextGetThreadAffinityCoreList](_mind_spore.md#oh_ai_contextgetthreadaffinitycorelist) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores.|
| [OH_AI_ContextSetEnableParallel](_mind_spore.md#oh_ai_contextsetenableparallel) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. The setting is ineffective because the feature of this API is not yet available.|
| [OH_AI_ContextGetEnableParallel](_mind_spore.md#oh_ai_contextgetenableparallel) (const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported.|
| [OH_AI_ContextAddDeviceInfo](_mind_spore.md#oh_ai_contextadddeviceinfo) ([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Attaches the custom device information to the inference context.|
| [OH_AI_DeviceInfoCreate](_mind_spore.md#oh_ai_deviceinfocreate) ([OH_AI_DeviceType](_mind_spore.md#oh_ai_devicetype) device_type) | Creates a device information object.|
| [OH_AI_DeviceInfoDestroy](_mind_spore.md#oh_ai_deviceinfodestroy) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) \*device_info) | Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually.|
| [OH_AI_DeviceInfoSetProvider](_mind_spore.md#oh_ai_deviceinfosetprovider) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the provider name.|
| [OH_AI_DeviceInfoGetProvider](_mind_spore.md#oh_ai_deviceinfogetprovider) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the provider name.|
| [OH_AI_DeviceInfoSetProviderDevice](_mind_spore.md#oh_ai_deviceinfosetproviderdevice) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device.|
| [OH_AI_DeviceInfoGetProviderDevice](_mind_spore.md#oh_ai_deviceinfogetproviderdevice) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device.|
| [OH_AI_DeviceInfoGetDeviceType](_mind_spore.md#oh_ai_deviceinfogetdevicetype) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the device type.|
| [OH_AI_DeviceInfoSetEnableFP16](_mind_spore.md#oh_ai_deviceinfosetenablefp16) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU and GPU devices.|
| [OH_AI_DeviceInfoGetEnableFP16](_mind_spore.md#oh_ai_deviceinfogetenablefp16) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices.|
| [OH_AI_DeviceInfoSetFrequency](_mind_spore.md#oh_ai_deviceinfosetfrequency) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_DeviceInfoGetFrequency](_mind_spore.md#oh_ai_deviceinfogetfrequency) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_GetAllNNRTDeviceDescs](_mind_spore.md#oh_ai_getallnnrtdevicedescs) (size_t \*num) | Obtains the descriptions of all NNRt devices in the system.|
| [OH_AI_DestroyAllNNRTDeviceDescs](_mind_spore.md#oh_ai_destroyallnnrtdevicedescs) ([NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*\*desc) | Destroys the array of NNRT descriptions obtained by [OH_AI_GetAllNNRTDeviceDescs](_mind_spore.md#oh_ai_getallnnrtdevicedescs).|
| [OH_AI_GetDeviceIdFromNNRTDeviceDesc](_mind_spore.md#oh_ai_getdeviceidfromnnrtdevicedesc) (const [NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.|
| [OH_AI_GetNameFromNNRTDeviceDesc](_mind_spore.md#oh_ai_getnamefromnnrtdevicedesc) (const [NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device name from the specified NNRt device description.|
| [OH_AI_GetTypeFromNNRTDeviceDesc](_mind_spore.md#oh_ai_gettypefromnnrtdevicedesc) (const [NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device type from the specified NNRt device description.|
| [OH_AI_CreateNNRTDeviceInfoByName](_mind_spore.md#oh_ai_creatennrtdeviceinfobyname) (const char \*name) | Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_CreateNNRTDeviceInfoByType](_mind_spore.md#oh_ai_creatennrtdeviceinfobytype) ([OH_AI_NNRTDeviceType](_mind_spore.md#oh_ai_nnrtdevicetype) type) | Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_DeviceInfoSetDeviceId](_mind_spore.md#oh_ai_deviceinfosetdeviceid) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, size_t device_id) | Sets the ID of an NNRt device. This API is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetDeviceId](_mind_spore.md#oh_ai_deviceinfogetdeviceid) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the ID of an NNRt device. This API is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPerformanceMode](_mind_spore.md#oh_ai_deviceinfosetperformancemode) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, [OH_AI_PerformanceMode](_mind_spore.md#oh_ai_performancemode) mode) | Sets the performance mode of an NNRt device. This API is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPerformanceMode](_mind_spore.md#oh_ai_deviceinfogetperformancemode) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the performance mode of an NNRt device. This API is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPriority](_mind_spore.md#oh_ai_deviceinfosetpriority) ([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, [OH_AI_Priority](_mind_spore.md#oh_ai_priority) priority) | Sets the priority of an NNRT task. This API is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPriority](_mind_spore.md#oh_ai_deviceinfogetpriority) (const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the priority of an NNRT task. This API is available only for NNRt devices.|
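For orientation, the following minimal sketch, assuming the standard OpenHarmony MindSpore Lite NDK headers, shows how several of these context APIs are commonly combined before model building; the thread count, affinity mode, and float16 setting are illustrative values only.
```c
#include <stdbool.h>
#include "mindspore/context.h"
#include "mindspore/types.h"

// Build a context with two runtime threads bound to big cores and a CPU
// device that has float16 inference enabled.
OH_AI_ContextHandle BuildCpuContext(void) {
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    OH_AI_ContextSetThreadNum(context, 2);
    OH_AI_ContextSetThreadAffinityMode(context, 1);  // illustrative: bind big cores

    OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    OH_AI_DeviceInfoSetEnableFP16(cpu_info, true);
    // After this call the context owns cpu_info; no manual destroy is needed.
    OH_AI_ContextAddDeviceInfo(context, cpu_info);
    return context;
}
```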