Inter-process communication (IPC) and remote procedure call (RPC) are used to implement cross-process communication. IPC uses the Binder driver to implement cross-process communication within a device, whereas RPC uses the DSoftBus driver to implement cross-process communication across devices. Cross-process communication is required because each process has its own independent resources and memory space and one process is not allowed to access the resources and memory space of other processes.
> **NOTE**
> The applications in the stage model cannot use IPC or RPC directly and must use the following capabilities to implement the related service scenarios:
>- [Background services](../application-models/background-services.md): uses IPC to implement service invocation across processes.
>- [Multi-device collaboration](../application-models/hop-multi-device-collaboration.md): uses RPC to call remote interfaces and transfer data.

## Implementation Principles
IPC and RPC usually use the client-server model, where the client (service requester, that is, the process that requests a service) obtains the proxy of the server (service provider, that is, the process that provides the service) and uses the proxy to read and write data to implement data communication across processes.

More specifically, the client constructs a proxy object of the server. The proxy object has the same functions as the server. To access a certain API of the server, you only need to access the corresponding API in the proxy object. The proxy object sends the request to the server, and the server processes the received request and returns the processing result to the proxy object through the driver. Then, the proxy object forwards the processing result to the client.

The server registers system abilities (SAs) with the system ability manager (SAMgr), which manages the SAs and provides APIs for clients. To communicate with a specific SA, the client must obtain the proxy of the SA from SAMgr.

In the following sections, proxy represents the service requester, and stub represents the service provider.

![IPC & RPC communication mechanisms](figures/IPC_RPC_communication.PNG)

## Constraints
- During cross-process communication within a single device, the maximum amount of data to be transmitted is about 1 MB. If the amount of data to be transmitted exceeds this limit, use [anonymous shared memory](https://gitee.com/openharmony/docs/blob/master/en/application-dev/reference/apis/js-apis-rpc.md#ashmem8).
- Subscription to death notifications of anonymous stub objects (not registered with SAMgr) is prohibited in RPC.
- During cross-process communication across devices, a proxy object cannot be passed back to the device that hosts the stub object pointed to by the proxy object. That is, a proxy object that points to the stub object of a remote device cannot be passed across processes twice on the local device.
## Recommendations
First, compile an API class and define message codes in it so that both communication parties can identify operations. The API class may declare APIs that are not implemented, because it must be inherited by both communication parties and their inheritance classes cannot be abstract classes. Therefore, when inheriting the API class, both communication parties must implement the unimplemented APIs to ensure that the inheritance classes are not abstract classes.
Then, implement the API class specific to the stub, and override the **AsObject** and **OnRemoteRequest** APIs. In addition, compile the proxy to implement the APIs in the API class and override the **AsObject** API. You can also encapsulate an additional API for calling **SendRequest** to send data to the peer.
After the preceding operations are done, register a system ability (SA) with SAMgr. Note that the registration should be completed in the process that hosts the stub. Then, obtain the proxy from SAMgr as needed to implement cross-process communication with the stub.
Related steps are as follows:
- Implementing the API class: Inherit **IRemoteBroker**, define message codes, and declare APIs that are not implemented in the API class.
- Implementing the service provider (stub): Inherit **IRemoteStub** or **RemoteObject**, and override the **AsObject** and **OnRemoteRequest** APIs.
- Implementing the service requester (proxy): Inherit **IRemoteProxy** or **RemoteProxy**, override the **AsObject** API, and encapsulate the required API to call **SendRequest**.
- Registering the SA: Apply for a unique ID for the SA, and register the SA with SAMgr.
- Obtaining the SA: Obtain the proxy based on the SA ID and device ID, and use the proxy to communicate with the remote end.
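
The steps above target the native IPC framework (**IRemoteStub**, **IRemoteProxy**, **SendRequest**). As a rough illustration of the same proxy/stub pattern only, the following ArkTS sketch uses the **@ohos.rpc** module; the descriptor, message code, and payload handling are assumptions made for demonstration.

```ts
import rpc from '@ohos.rpc';

// Illustrative message code agreed on by both the proxy and the stub.
const CODE_PLUS_ONE = 1;

// Stub side: a remote object that handles requests forwarded by the driver.
class TestStub extends rpc.RemoteObject {
  constructor(descriptor: string) {
    super(descriptor);
  }

  onRemoteMessageRequest(code: number, data: rpc.MessageSequence,
                         reply: rpc.MessageSequence, options: rpc.MessageOption): boolean {
    if (code === CODE_PLUS_ONE) {
      const value = data.readInt(); // read the request data written by the proxy
      reply.writeInt(value + 1);    // write the processing result back to the proxy
      return true;
    }
    return false;                   // unrecognized message code
  }
}

let stub = new TestStub('ohos.rpc.test.plusone');
```
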
## Related Modules
[Distributed Ability Manager Service Framework](https://gitee.com/openharmony/ability_dmsfwk)
The **hilog** module allows your applications or services to output logs based on the specified type, level, and format string. Such logs help you learn the running status of applications and better debug programs.
> **NOTE**
>
> The initial APIs of this module are supported since API version 7. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## Modules to Import
...
| domain | number | Yes | Service domain of logs. The value ranges from **0x0** to **0xFFFF**.<br>You can define the value as required.|
| tag | string | Yes | Log tag in the string format. You are advised to use this parameter to identify a particular service behavior or the class holding the ongoing method.|
| DEBUG | 3 | Log level used to record more detailed process information than INFO logs to help developers analyze service processes and locate faults.|
| INFO | 4 | Log level used to record key service process nodes and exceptions that occur during service running,<br>for example, no network signal or login failure.<br>These logs should be recorded by the dominant module in the service to avoid repeated logging conducted by multiple invoked modules or low-level functions.|
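
For example, you can check whether a given level is printable before assembling an expensive log message. The following is a minimal sketch assuming the **@ohos.hilog** module; the domain and tag values are placeholders.

```ts
import hilog from '@ohos.hilog';

// Print the INFO log only if it is printable for this service domain and tag.
if (hilog.isLoggable(0x0001, 'testTag', hilog.LogLevel.INFO)) {
  hilog.info(0x0001, 'testTag', '%{public}s', 'network state changed');
}
```
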
...
| domain | number | Yes | Service domain of logs. The value ranges from **0x0** to **0xFFFF**.<br>You can define the value as required.|
| tag | string | Yes | Log tag in the string format. You are advised to use this parameter to identify a particular service behavior or the class holding the ongoing method.|
| format | string | Yes | Format string used to output logs in a specified format. It can contain several elements, where the parameter type and privacy identifier are mandatory.<br>Parameters labeled **{public}** are public data and are displayed in plaintext; parameters labeled **{private}** (default value) are private data and are filtered by **\<private>**.|
| args | any[] | Yes | Variable-length parameter list corresponding to the format string. The number and type of parameters must map to the identifier in the format string.|
**Example**
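
A minimal call that produces the output shown below might look as follows (a sketch assuming the **@ohos.hilog** module; the domain, tag, and arguments mirror the sample output):

```ts
import hilog from '@ohos.hilog';

// "hello" fills %{public}s and is shown in plaintext; 3 fills %{private}d and is masked.
hilog.fatal(0x0001, 'testTag', '%{public}s World %{private}d', 'hello', 3);
```
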
...
If `"hello"` is filled in `%{public}s` and `3` in `%{private}d`, the output log is as follows:
```
08-05 12:21:47.579 2695-2703/com.example.myapplication F 00001/testTag: hello World <private>
```
## Parameter Format
Parameters in the log are printed in the following format:
%[private flag]specifier
| Privacy Flag| Description|
| ------------ | ---- |
| Unspecified | The default value is **private**, indicating that parameters are not printed in plaintext.|
| private | Prints private parameters.|
| public | Prints parameters in plaintext.|
| Specifier| Description| Example|
| ------------ | ---- | ---- |
| d/i | Prints logs of the **number** and **bigint** types.| 123 |
| s | Prints logs of the **string**, **undefined**, **bool**, and **null** types.| "123" |
The **inputDevice** module allows you to listen for hot swap events of input devices and query information about input devices.
> **NOTE**
>
> The initial APIs of this module are supported since API version 8. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## Modules to Import
...
| deviceId | number | Yes | ID of the input device. |
| callback | AsyncCallback<[InputDeviceData](#inputdevicedata)> | Yes | Callback used to return the result, which is an **InputDeviceData** object.|
...
**Parameters**
| Name | Type | Mandatory| Description |
| -------- | ------ | ---- | ------------ |
| deviceId | number | Yes | ID of the input device.|
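
For illustration, a minimal query sketch (assuming the **@ohos.multimodalInput.inputDevice** module and using a placeholder device ID):

```ts
import inputDevice from '@ohos.multimodalInput.inputDevice';

try {
  // Query information about the input device whose ID is 1.
  inputDevice.getDeviceInfo(1, (error, deviceData) => {
    if (error) {
      console.error(`Failed to get device info, error: ${JSON.stringify(error)}`);
      return;
    }
    console.log(`Device name: ${deviceData.name}`);
  });
} catch (error) {
  console.error(`getDeviceInfo failed: ${JSON.stringify(error)}`);
}
```
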
| serviceType | string | Yes| Type of the mDNS service. The value is in the format of **\_\<name>.<_tcp/_udp>**, where **name** contains a maximum of 63 characters excluding periods (.). |
| serviceName | string | Yes| Name of the mDNS service. |
| port | number | No| Port number of the mDNS server. |
| host | [NetAddress](js-apis-net-connection.md#netaddress) | No| IP address of the device that provides the mDNS service. The IP address is not effective when an mDNS service is added or removed. |
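
Putting the attributes together, a local service description might be constructed as follows (a sketch; the service type, name, port, and address are placeholder values, and only the fields listed in the table above are used):

```ts
// Illustrative mDNS service description; field names follow the table above.
let localServiceInfo = {
  serviceType: '_print._tcp',      // _<name>.<_tcp/_udp>
  serviceName: 'my-printer',
  port: 35862,
  host: { address: '10.14.0.99' }  // NetAddress of the device providing the service
};
```
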
| isImmediate<sup>10+</sup> | boolean | No | Whether to hibernate a device immediately. If this parameter is not specified, the default value **false** is used. The system automatically determines when to enter the hibernation state.<br>**NOTE**: This parameter is supported since API version 10.|
**Error codes**

For details about the error codes, see [Power Manager Error Codes](../errorcodes/errorcode-power.md).
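
As a hedged sketch only, the **isImmediate** parameter would typically be passed to the system API that triggers hibernation (assumed here to be **power.suspend**, which requires system privileges):

```ts
import power from '@ohos.power';

try {
  // Request immediate hibernation (isImmediate = true, supported since API version 10).
  power.suspend(true);
} catch (err) {
  console.error('suspend failed: ' + JSON.stringify(err));
}
```
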
| businessKey| string | Yes | Unique service ID registered on the multimodal side. It corresponds to **businessId** in the **ability_launch_config.json** file.|
| delay | number | Yes | Delay for starting an ability using the shortcut key, in ms.|
| callback | AsyncCallback<void> | Yes | Callback used to return the result. |
| businessKey| string | Yes | Unique service ID registered on the multimodal side. It corresponds to **businessId** in the **ability_launch_config.json** file.|
| delay | number | Yes | Delay for starting an ability using the shortcut key, in ms.|
Sets other properties of the TCPSocket connection after successful binding of the local IP address and port number of the connection. This API uses an asynchronous callback to return the result.

Sets other properties of the TCPSocket connection after successful binding of the local IP address and port number of the connection. This API uses a promise to return the result.
| type | string | Yes | Type of the event to subscribe to. **message**: message receiving event|
| callback | Callback<{message:ArrayBuffer,remoteInfo:[SocketRemoteInfo](#socketremoteinfo)}> | No | Callback used to return the result. **message**: received message.<br>**remoteInfo**: socket connection information.|
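
For illustration, the following sketch uses a plain **TCPSocket** instance from the **@ohos.net.socket** module to tune the connection after binding and to subscribe to **message** events (addresses and option values are placeholders; the TLSSocket calls are analogous):

```ts
import socket from '@ohos.net.socket';

let tcp = socket.constructTCPSocketInstance();
tcp.bind({ address: '192.168.1.10', port: 8080 }, (bindErr) => {
  if (bindErr) {
    console.error('bind failed: ' + JSON.stringify(bindErr));
    return;
  }
  // Set other properties only after the local IP address and port are bound.
  tcp.setExtraOptions({ keepAlive: true, receiveBufferSize: 4096 }, (err) => {
    if (err) {
      console.error('setExtraOptions failed: ' + JSON.stringify(err));
    }
  });
});

// Subscribe to incoming messages; value.message is an ArrayBuffer and
// value.remoteInfo describes the peer socket.
tcp.on('message', (value) => {
  console.log(`received ${value.message.byteLength} bytes from ${value.remoteInfo.address}:${value.remoteInfo.port}`);
});
```
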
| [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.|
| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum dimension is set by **MS_MAX_SHAPE_NUM**. |
| [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) | Defines the operator information passed in a callback. |
| [OH_AI_ContextHandle](#oh_ai_contexthandle) | Defines the pointer to the MindSpore context. |
| [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information. |
| [OH_AI_DataType](#oh_ai_datatype) | Declares data types supported by MSTensor. |
| [OH_AI_Format](#oh_ai_format) | Declares data formats supported by MSTensor. |
| [OH_AI_ModelHandle](#oh_ai_modelhandle) | Defines the pointer to a model object. |
| [OH_AI_TensorHandleArray](#oh_ai_tensorhandlearray) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.|
| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum dimension is set by **MS_MAX_SHAPE_NUM**. |
| [OH_AI_CallBackParam](#oh_ai_callbackparam) | Defines the operator information passed in a callback. |
| [OH_AI_KernelCallBack](#oh_ai_kernelcallback) | Defines the pointer to a callback. |
| [OH_AI_Status](#oh_ai_status) | Defines MindSpore status codes. |
| [OH_AI_TensorHandle](#oh_ai_tensorhandle) | Defines the handle of a tensor object. |
| [OH_AI_ModelType](#oh_ai_modeltype) | Defines model file types. |
| [OH_AI_DeviceType](#oh_ai_devicetype) | Defines the supported device types. |
| [OH_AI_ContextCreate](#oh_ai_contextcreate)() | Creates a context object. |
| [OH_AI_ContextDestroy](#oh_ai_contextdestroy)([OH_AI_ContextHandle](#oh_ai_contexthandle)\*context) | Destroys a context object. |
| [OH_AI_ContextSetThreadNum](#oh_ai_contextsetthreadnum)([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads. |
| [OH_AI_ContextGetThreadNum](#oh_ai_contextgetthreadnum)(const[OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the number of threads. |
| [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode)([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are classified into large, medium, and small cores based on the CPU frequency. You only need to bind the large or medium cores, but not small cores.|
| [OH_AI_ContextGetThreadAffinityMode](#oh_ai_contextgetthreadaffinitymode)(const[OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores. |
| [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist)([OH_AI_ContextHandle](#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread. |
| [OH_AI_ContextGetThreadAffinityCoreList](#oh_ai_contextgetthreadaffinitycorelist)(const[OH_AI_ContextHandle](#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores. |
| [OH_AI_ContextSetEnableParallel](#oh_ai_contextsetenableparallel)([OH_AI_ContextHandle](#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. The setting is ineffective because the feature of this API is not yet available. |
| [OH_AI_ContextGetEnableParallel](#oh_ai_contextgetenableparallel)(const[OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported. |
| [OH_AI_ContextAddDeviceInfo](#oh_ai_contextadddeviceinfo)([OH_AI_ContextHandle](#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Attaches the custom device information to the inference context. |
| [OH_AI_DeviceInfoCreate](#oh_ai_deviceinfocreate)([OH_AI_DeviceType](#oh_ai_devicetype) device_type) | Creates a device information object. |
| [OH_AI_DeviceInfoDestroy](#oh_ai_deviceinfodestroy)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle)\*device_info) | Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually. |
| [OH_AI_DeviceInfoSetProvider](#oh_ai_deviceinfosetprovider)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the name of a provider. |
| [OH_AI_DeviceInfoGetProvider](#oh_ai_deviceinfogetprovider)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the provider name. |
| [OH_AI_DeviceInfoSetProviderDevice](#oh_ai_deviceinfosetproviderdevice)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device. |
| [OH_AI_DeviceInfoGetProviderDevice](#oh_ai_deviceinfogetproviderdevice)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device. |
| [OH_AI_DeviceInfoGetDeviceType](#oh_ai_deviceinfogetdevicetype)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the device type. |
| [OH_AI_DeviceInfoSetEnableFP16](#oh_ai_deviceinfosetenablefp16)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU and GPU devices. |
| [OH_AI_DeviceInfoGetEnableFP16](#oh_ai_deviceinfogetenablefp16)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices. |
| [OH_AI_DeviceInfoSetFrequency](#oh_ai_deviceinfosetfrequency)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices. |
| [OH_AI_DeviceInfoGetFrequency](#oh_ai_deviceinfogetfrequency)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices. |
| [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs)(size_t\*num) | Obtains the descriptions of all NNRt devices in the system. |
| [OH_AI_DestroyAllNNRTDeviceDescs](#oh_ai_destroyallnnrtdevicedescs)([NNRTDeviceDesc](#nnrtdevicedesc)\*\*desc) | Destroys the array of NNRT descriptions obtained by [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs).|
| [OH_AI_GetDeviceIdFromNNRTDeviceDesc](#oh_ai_getdeviceidfromnnrtdevicedesc)(const[NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.|
| [OH_AI_GetNameFromNNRTDeviceDesc](#oh_ai_getnamefromnnrtdevicedesc)(const[NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device name from the specified NNRt device description. |
| [OH_AI_GetTypeFromNNRTDeviceDesc](#oh_ai_gettypefromnnrtdevicedesc)(const[NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device type from the specified NNRt device description. |
| [OH_AI_CreateNNRTDeviceInfoByName](#oh_ai_creatennrtdeviceinfobyname)(const char \*name) | Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_CreateNNRTDeviceInfoByType](#oh_ai_creatennrtdeviceinfobytype)([OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) type) | Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_DeviceInfoSetDeviceId](#oh_ai_deviceinfosetdeviceid)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, size_t device_id) | Sets the ID of an NNRt device. This API is available only for NNRt devices. |
| [OH_AI_DeviceInfoGetDeviceId](#oh_ai_deviceinfogetdeviceid)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the ID of an NNRt device. This API is available only for NNRt devices. |
| [OH_AI_DeviceInfoSetPerformanceMode](#oh_ai_deviceinfosetperformancemode)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_PerformanceMode](#oh_ai_performancemode) mode) | Sets the performance mode of an NNRt device. This API is available only for NNRt devices. |
| [OH_AI_DeviceInfoGetPerformanceMode](#oh_ai_deviceinfogetperformancemode)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the performance mode of an NNRt device. This API is available only for NNRt devices. |
| [OH_AI_DeviceInfoSetPriority](#oh_ai_deviceinfosetpriority)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_Priority](#oh_ai_priority) priority) | Sets the priority of an NNRT task. This API is available only for NNRt devices. |
| [OH_AI_DeviceInfoGetPriority](#oh_ai_deviceinfogetpriority)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the priority of an NNRT task. This API is available only for NNRt devices. |
| [OH_AI_ModelCreate](#oh_ai_modelcreate)() | Creates a model object. |
| [OH_AI_ModelDestroy](#oh_ai_modeldestroy)([OH_AI_ModelHandle](#oh_ai_modelhandle)\*model) | Destroys a model object. |
| [OH_AI_ModelBuild](#oh_ai_modelbuild)([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const void \*model_data, size_t data_size, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from the memory buffer. |
...
| [OH_AI_TensorGetName](#oh_ai_tensorgetname)(const[OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor name. |
| [OH_AI_TensorSetDataType](#oh_ai_tensorsetdatatype)([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_DataType](#oh_ai_datatype) type) | Sets the data type of a tensor. |
| [OH_AI_TensorGetDataType](#oh_ai_tensorgetdatatype)(const[OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the data type of a tensor. |
| [OH_AI_TensorSetShape](#oh_ai_tensorsetshape)([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const int64_t \*shape, size_t shape_num) | Sets the shape of a tensor. |
| [OH_AI_TensorGetShape](#oh_ai_tensorgetshape)(const[OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, size_t \*shape_num) | Obtains the shape of a tensor. |
This pointer is used to set the two callback functions in [OH_AI_ModelPredict](#oh_ai_modelpredict). Each callback function must contain three parameters, where **inputs** and **outputs** indicate the input and output tensors of the operator, and **kernel_Info** indicates information about the current operator. You can use the callback functions to monitor the operator execution status, for example, operator execution time and the operator correctness.
...
```
typedef void* OH_AI_ModelHandle
```
**Description**

Defines the pointer to a model object.
...
| OH_AI_DEVICETYPE_INVALID | Invalid device type |
### OH_AI_Format
...
```
enum OH_AI_Format
```
**Description**

Declares data formats supported by MSTensor.
| Value | Description |
| ------------------- | ---------------- |
| OH_AI_FORMAT_NCHW | Tensor data is stored in the sequence of batch number N, channel C, height H, and width W. |
| OH_AI_FORMAT_NHWC | Tensor data is stored in the sequence of batch number N, height H, width W, and channel C. |
| OH_AI_FORMAT_NHWC4 | Tensor data is stored in the sequence of batch number N, height H, width W, and channel C. The C axis is 4-byte aligned. |
| OH_AI_FORMAT_HWKC | Tensor data is stored in the sequence of height H, width W, core count K, and channel C. |
| OH_AI_FORMAT_HWCK | Tensor data is stored in the sequence of height H, width W, channel C, and core count K. |
| OH_AI_FORMAT_KCHW | Tensor data is stored in the sequence of core count K, channel C, height H, and width W. |
| OH_AI_FORMAT_CKHW | Tensor data is stored in the sequence of channel C, core count K, height H, and width W. |
| OH_AI_FORMAT_KHWC | Tensor data is stored in the sequence of core count K, height H, width W, and channel C. |
| OH_AI_FORMAT_CHWK | Tensor data is stored in the sequence of channel C, height H, width W, and core count K. |
| OH_AI_FORMAT_HW | Tensor data is stored in the sequence of height H and width W. |
| OH_AI_FORMAT_HW4 | Tensor data is stored in the sequence of height H and width W. The W axis is 4-byte aligned. |
| OH_AI_FORMAT_NC | Tensor data is stored in the sequence of batch number N and channel C. |
| OH_AI_FORMAT_NC4 | Tensor data is stored in the sequence of batch number N and channel C. The C axis is 4-byte aligned. |
| OH_AI_FORMAT_NC4HW4 | Tensor data is stored in the sequence of batch number N, channel C, height H, and width W. The C axis and W axis are 4-byte aligned.|
| OH_AI_FORMAT_NCDHW | Tensor data is stored in the sequence of batch number N, channel C, depth D, height H, and width W. |
| OH_AI_FORMAT_NWC | Tensor data is stored in the sequence of batch number N, width W, and channel C. |
| OH_AI_FORMAT_NCW | Tensor data is stored in the sequence of batch number N, channel C, and width W. |
### OH_AI_ModelType
...
```
enum OH_AI_ModelType
```
**Description**

Defines model file types.
| Value | Description |
| ----------------------- | ------------------ |
| OH_AI_MODELTYPE_MINDIR | Model type of MindIR. The extension of the model file name is **.ms**.|
| OH_AI_MODELTYPE_INVALID | Invalid model type<br/>since 9 |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| core_num | Number of CPU cores. |

**Returns**

CPU core binding list. This list is managed by [OH_AI_ContextHandle](#oh_ai_contexthandle). The caller does not need to destroy it manually.
### OH_AI_ContextGetThreadAffinityMode()
...
```
OH_AI_API int OH_AI_ContextGetThreadAffinityMode (const OH_AI_ContextHandle context)
```
**Description**

Obtains the affinity mode for binding runtime threads to CPU cores.

| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| is_parallel | Whether parallelism between operators is supported. The value **true** means that parallelism between operators is supported, and the value **false** means the opposite. |
### OH_AI_ContextSetThreadAffinityCoreList()
...
Sets the list of CPU cores bound to a runtime thread.

For example, if **core_list** is set to **[2,6,8]**, threads run on the 2nd, 6th, and 8th CPU cores. If [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) and [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) are called for the same context object, the **core_list** parameter of [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) takes effect, but the **mode** parameter of [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) does not.

| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| core_list | List of bound CPU cores. |
| core_num | Number of cores, which indicates the length of **core_list**. |
...
```
OH_AI_API void OH_AI_ContextSetThreadAffinityMode (OH_AI_ContextHandle context, int mode )
```
**Description**

Sets the affinity mode for binding runtime threads to CPU cores, which are classified into large, medium, and small cores based on the CPU frequency. You only need to bind the large or medium cores, but not small cores.
Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.
**Parameters**
| Name| Description |
| ---- | ---------------- |
| name | Name of the NNRt device.|
**Returns**
[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.
Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.
| desc | Double pointer to the array of the NNRt device descriptions. After the operation is complete, the content pointed to by **desc** is set to **NULL**.|
Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually.
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
**Returns**
NPU frequency type. The value ranges from **0** to **4**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**
NNRt performance mode, which is specified by [OH_AI_PerformanceMode](#oh_ai_performancemode).
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| frequency | NPU frequency type. The value ranges from **0** to **4**. The default value is **3**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance|
Obtains the NNRt device name from the specified NNRt device description.
**Parameters**
| Name| Description |
| ---- | -------------------------------- |
| desc | Pointer to the NNRt device description.|
**Returns**
NNRt device name. The value is a pointer that points to a constant string, which is held by **desc**. The caller does not need to destroy it separately.
Loads and builds a MindSpore model from the memory buffer.

Note that the same **OH_AI_ContextHandle** object can only be passed to **OH_AI_ModelBuild** or **OH_AI_ModelBuildFromFile** once. If you call this function multiple times, make sure that you create multiple **OH_AI_ContextHandle** objects accordingly.

Loads and builds a MindSpore model from a model file.

Note that the same **OH_AI_ContextHandle** object can only be passed to **OH_AI_ModelBuild** or **OH_AI_ModelBuildFromFile** once. If you call this function multiple times, make sure that you create multiple **OH_AI_ContextHandle** objects accordingly.
| inputs | Tensor array structure corresponding to the model input. |
| shape_infos | Input shape information array, which consists of tensor shapes arranged in the model input sequence. The model adjusts the tensor shapes in sequence.|
| shape_info_num | Length of the shape information array. |
**Returns**
Status code enumerated by [OH_AI_Status](#oh_ai_status-1). The value **MSStatus::kMSStatusSuccess** indicates that the operation is successful.
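The parameters above belong to the model resize API. The rough sketch below assumes it is **OH_AI_ModelResize**, that **OH_AI_ShapeInfo** exposes **shape_num** and **shape** fields, and that the model's first input uses an NHWC layout; verify all of these against the headers.

```c
#include <mindspore/model.h>   // assumed header path

// Sketch: resize the first input of a built model to batch size 2 (assumed NHWC layout).
OH_AI_Status ResizeFirstInput(OH_AI_ModelHandle model) {
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);

    // One OH_AI_ShapeInfo per model input, in input order; only one input is resized here.
    OH_AI_ShapeInfo shape_infos[1];
    const int64_t new_shape[4] = {2, 224, 224, 3};  // illustrative shape
    shape_infos[0].shape_num = 4;
    for (size_t i = 0; i < 4; ++i) {
        shape_infos[0].shape[i] = new_shape[i];
    }

    return OH_AI_ModelResize(model, inputs, shape_infos, 1);
}
```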
### OH_AI_TensorClone()
...
| [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) | Defines the pointer to the MindSpore context.|
| [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information.|
### Functions

| Name| Description|
| -------- | -------- |
| [OH_AI_ContextCreate](_mind_spore.md#oh_ai_contextcreate)() | Creates a context object.|
| [OH_AI_ContextDestroy](_mind_spore.md#oh_ai_contextdestroy)([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) \*context) | Destroys a context object.|
| [OH_AI_ContextSetThreadNum](_mind_spore.md#oh_ai_contextsetthreadnum)([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads.|
| [OH_AI_ContextGetThreadNum](_mind_spore.md#oh_ai_contextgetthreadnum)(const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Obtains the number of threads.|
| [OH_AI_ContextSetThreadAffinityMode](_mind_spore.md#oh_ai_contextsetthreadaffinitymode)([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are classified into large, medium, and small cores based on the CPU frequency. You only need to bind the large or medium cores, but not small cores.|
| [OH_AI_ContextGetThreadAffinityMode](_mind_spore.md#oh_ai_contextgetthreadaffinitymode)(const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores.|
| [OH_AI_ContextSetThreadAffinityCoreList](_mind_spore.md#oh_ai_contextsetthreadaffinitycorelist)([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread.|
| [OH_AI_ContextGetThreadAffinityCoreList](_mind_spore.md#oh_ai_contextgetthreadaffinitycorelist)(const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores.|
| [OH_AI_ContextSetEnableParallel](_mind_spore.md#oh_ai_contextsetenableparallel)([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. The setting is ineffective because the feature of this API is not yet available.|
| [OH_AI_ContextGetEnableParallel](_mind_spore.md#oh_ai_contextgetenableparallel)(const [OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported.|
| [OH_AI_ContextAddDeviceInfo](_mind_spore.md#oh_ai_contextadddeviceinfo)([OH_AI_ContextHandle](_mind_spore.md#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Attaches the custom device information to the inference context.|
| [OH_AI_DeviceInfoCreate](_mind_spore.md#oh_ai_deviceinfocreate)([OH_AI_DeviceType](_mind_spore.md#oh_ai_devicetype) device_type) | Creates a device information object.|
| [OH_AI_DeviceInfoDestroy](_mind_spore.md#oh_ai_deviceinfodestroy)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) \*device_info) | Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually.|
| [OH_AI_DeviceInfoSetProvider](_mind_spore.md#oh_ai_deviceinfosetprovider)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the name of a provider.|
| [OH_AI_DeviceInfoGetProvider](_mind_spore.md#oh_ai_deviceinfogetprovider)(const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the provider name.|
| [OH_AI_DeviceInfoSetProviderDevice](_mind_spore.md#oh_ai_deviceinfosetproviderdevice)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device.|
| [OH_AI_DeviceInfoGetProviderDevice](_mind_spore.md#oh_ai_deviceinfogetproviderdevice)(const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device.|
| [OH_AI_DeviceInfoGetDeviceType](_mind_spore.md#oh_ai_deviceinfogetdevicetype)(const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the device type.|
| [OH_AI_DeviceInfoSetEnableFP16](_mind_spore.md#oh_ai_deviceinfosetenablefp16)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU and GPU devices.|
| [OH_AI_DeviceInfoGetEnableFP16](_mind_spore.md#oh_ai_deviceinfogetenablefp16)(const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices.|
| [OH_AI_DeviceInfoSetFrequency](_mind_spore.md#oh_ai_deviceinfosetfrequency)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_DeviceInfoGetFrequency](_mind_spore.md#oh_ai_deviceinfogetfrequency)(const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_GetAllNNRTDeviceDescs](_mind_spore.md#oh_ai_getallnnrtdevicedescs)(size_t \*num) | Obtains the descriptions of all NNRt devices in the system.|
| [OH_AI_DestroyAllNNRTDeviceDescs](_mind_spore.md#oh_ai_destroyallnnrtdevicedescs)([NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*\*desc) | Destroys the array of NNRt device descriptions obtained by [OH_AI_GetAllNNRTDeviceDescs](_mind_spore.md#oh_ai_getallnnrtdevicedescs).|
| [OH_AI_GetDeviceIdFromNNRTDeviceDesc](_mind_spore.md#oh_ai_getdeviceidfromnnrtdevicedesc)(const [NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.|
| [OH_AI_GetNameFromNNRTDeviceDesc](_mind_spore.md#oh_ai_getnamefromnnrtdevicedesc)(const [NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device name from the specified NNRt device description.|
| [OH_AI_GetTypeFromNNRTDeviceDesc](_mind_spore.md#oh_ai_gettypefromnnrtdevicedesc)(const [NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device type from the specified NNRt device description.|
| [OH_AI_CreateNNRTDeviceInfoByName](_mind_spore.md#oh_ai_creatennrtdeviceinfobyname)(const char \*name) | Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_CreateNNRTDeviceInfoByType](_mind_spore.md#oh_ai_creatennrtdeviceinfobytype)([OH_AI_NNRTDeviceType](_mind_spore.md#oh_ai_nnrtdevicetype) type) | Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_DeviceInfoSetDeviceId](_mind_spore.md#oh_ai_deviceinfosetdeviceid)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, size_t device_id) | Sets the ID of an NNRt device. This API is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetDeviceId](_mind_spore.md#oh_ai_deviceinfogetdeviceid)(const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the ID of an NNRt device. This API is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPerformanceMode](_mind_spore.md#oh_ai_deviceinfosetperformancemode)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, [OH_AI_PerformanceMode](_mind_spore.md#oh_ai_performancemode) mode) | Sets the performance mode of an NNRt device. This API is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPerformanceMode](_mind_spore.md#oh_ai_deviceinfogetperformancemode)(const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the performance mode of an NNRt device. This API is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPriority](_mind_spore.md#oh_ai_deviceinfosetpriority)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, [OH_AI_Priority](_mind_spore.md#oh_ai_priority) priority) | Sets the priority of an NNRt task. This API is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPriority](_mind_spore.md#oh_ai_deviceinfogetpriority)(const [OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the priority of an NNRt task. This API is available only for NNRt devices.|
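To tie the NNRt-specific setters above together, a minimal configuration sketch might look as follows. The header path and the enumerator names (OH_AI_DEVICETYPE_NNRT, OH_AI_DEVICETYPE_CPU, OH_AI_PERFORMANCE_HIGH, OH_AI_PRIORITY_MEDIUM) are assumptions based on the OH_AI_DeviceType, OH_AI_PerformanceMode, and OH_AI_Priority enums and should be verified against _mind_spore.md.

```c
#include <mindspore/context.h>   // assumed header path

// Sketch: prepare a context that prefers an NNRt device, with the CPU also registered.
OH_AI_ContextHandle CreateNNRtContext(size_t nnrt_device_id) {
    OH_AI_ContextHandle context = OH_AI_ContextCreate();

    OH_AI_DeviceInfoHandle nnrt_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_NNRT);
    OH_AI_DeviceInfoSetDeviceId(nnrt_info, nnrt_device_id);                 // NNRt only
    OH_AI_DeviceInfoSetPerformanceMode(nnrt_info, OH_AI_PERFORMANCE_HIGH);  // NNRt only
    OH_AI_DeviceInfoSetPriority(nnrt_info, OH_AI_PRIORITY_MEDIUM);          // NNRt only
    OH_AI_ContextAddDeviceInfo(context, nnrt_info);   // the context takes ownership

    // A CPU device is also added so inference has a non-NNRt option
    // (the scheduling order between added devices is an assumption).
    OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    OH_AI_ContextAddDeviceInfo(context, cpu_info);

    return context;
}
```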
...
The following uses [DAYU200](https://gitee.com/openharmony/vendor_hihope/tree/master/rk3568) as an example to illustrate thermal log customization.
1. Create the `thermal` folder in the product directory [/vendor/hihope/rk3568](https://gitee.com/openharmony/vendor_hihope/tree/master/rk3568).
2. Create a target folder by referring to the [default thermal log configuration folder](https://gitee.com/openharmony/drivers_peripheral/tree/master/thermal/interfaces/hdi_service/profile), and install it in `//vendor/hihope/rk3568/thermal`. The content is as follows:
...
| Configuration Item| Description| Data Type| Value Range|
| -------- | -------- | -------- | -------- |
| interval | Interval for recording temperature tracing logs, in ms.| int | >0 |
| width | Width of the temperature tracing log, in characters.| int | >0 |
| outpath | Path for storing temperature tracing logs.| string | N/A|
**Table 2** Description of the node configuration
...
| value | path | Path for obtaining the thermal zone temperature.|
6. Write the `BUILD.gn` file by referring to the [BUILD.gn](https://gitee.com/openharmony/drivers_peripheral/blob/master/thermal/interfaces/hdi_service/profile/BUILD.gn) file in the default thermal log configuration folder to pack the `thermal_hdi_config.xml` file to the `//vendor/etc/thermal_config/hdf` directory. The configuration is as follows:
```shell
import("//build/ohos.gni")
...
ohos_prebuilt_etc("thermal_hdf_config") {
  source = "thermal_hdi_config.xml"
  relative_install_dir = "thermal_config/hdf"
  install_images = [ chipset_base_dir ] # Required configuration for installing the thermal_hdi_config.xml file in the vendor directory.
  part_name = "product_rk3568" # Set part_name to product_rk3568 for subsequent build. You can change it as required.
}
```
...
"subsystem": "product_hihope"
"subsystem": "product_hihope"
}
}
```
```
In the preceding code, `//vendor/hihope/rk3568/thermal/` is the folder path, `profile` and `etc` are folder names, and `thermal_hdf_config` and `param_files` are the build targets.
9. Build the customized version by referring to [Quick Start](../quick-start/quickstart-overview.md).
| ArkUI | UX changed | [The hover effect of the \<Button> component is changed from scaling up from 100% to 105% to applying an overlay of 0% to 5% opacity.](changelogs-arkui.md)|
| ArkUI | UX changed | [The alignment mode of multi-line text in toasts is changed from center-aligned to left-aligned.](changelogs-arkui.md)|
| Bundle management subsystem | Mechanism changed | [The HAP is no longer decompressed during HAP installation.](changelogs-bundlemanager.md)|
| Globalization | API added | [The getStringSync and getStringByNameSync APIs are added.](changelogs-global.md)|
| Globalization | Behavior changed | [The meaning of the return value for the API used to obtain the rawfile descriptor is changed.](changelogs-global.md)|
| Web | Input parameter added | [The input parameter type Resource is added for the setResponseData API.](changelogs-web.md)|
| Resource scheduler subsystem | Behavior changed | [The reminder agent allows you to customize buttons for system applications. Clicking a custom button will redirect you to the specified application page.](changelogs-resourceschedule.md)|