Process crash detection is implemented based on the Linux signal mechanism.
## Crash Log Collection
Process crash logs are a type of fault log managed together with the app freeze and JS application crash logs by the FaultLogger module. You can collect process crash logs in any of the following ways:
1. Determine the faulty module and fault type based on the fault log.

   Generally, you can identify the faulty module based on the crash process name and identify the crash cause based on the signal. You can also restore the function call chain of the crash stack based on the method names in the stack.

   In the example, **SIGSEGV** is thrown by the Linux kernel because of access to an invalid memory address. The problem occurs in the **TriggerSegmentFaultException** function (a hypothetical snippet illustrating this fault pattern is shown after this list).

   In most scenarios, a crash is caused by the topmost frame of the crash stack, for example, null pointer access or a proactive program abort.

   If the cause cannot be located through the call stack, check for other faults, for example, memory corruption or stack overflow.
2. Use the addr2line tool of Linux to parse the code line numbers and restore the call stack at the time of the process crash.

   When using the addr2line tool to parse the code line numbers of the crash stack, make sure that binary files with debugging information are used. Generally, such files are generated during the version build or application build.

   In this example, the crash is caused by assignment of a value to an unwritable area, at code line 57 of the **dfx_crasher.c** file. You can modify the code to avoid the crash.

   If the obtained code line number seems incorrect, you can fine-tune the address (for example, subtract 1 from the address) or disable some compilation optimization options. It is known that the obtained code line number may be incorrect when Link Time Optimization (LTO) is enabled.
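To make the SIGSEGV pattern above concrete, the following hypothetical C snippet (not the actual **dfx_crasher.c** source) shows the kind of code that produces such a crash: writing through a pointer to unmapped or unwritable memory causes the kernel to deliver **SIGSEGV** to the process.

```c
// Hypothetical illustration only: a function of this shape would appear at the top of the
// crash stack, with SIGSEGV reported as the crash signal.
void TriggerSegmentFaultException(void) {
    volatile int *ptr = NULL;  // address 0 is not mapped as writable
    *ptr = 1;                  // invalid write; the kernel sends SIGSEGV to the process
}
```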
The **dragInteraction** module provides the APIs to enable and disable listening for dragging status changes.
> **NOTE**
>
> - The initial APIs of this module are supported since API version 10. Newly added APIs will be marked with a superscript to indicate their earliest API version.
>
> - The APIs provided by this module are system APIs.

| Name | Type | Mandatory | Description |
| -------- | -------- | -------- | -------- |
| type | string | Yes | Event type. This field has a fixed value of **drag**.|
| callback | Callback<[DragState](#dragstate)> | No | Callback to be unregistered. If this parameter is not specified, all callbacks registered by the current application will be unregistered.|

| Name | Description |
| -------- | -------- |
| [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.|
| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum dimension is set by **MS_MAX_SHAPE_NUM**. |
| [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) | Defines the operator information passed in a callback. |
...
...
Provides APIs related to MindSpore Lite model inference.
| [OH_AI_DeviceInfoSetFrequency](#oh_ai_deviceinfosetfrequency)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices. |
| [OH_AI_DeviceInfoGetFrequency](#oh_ai_deviceinfogetfrequency)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices. |
| [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs)(size_t\*num) | Obtains the descriptions of all NNRt devices in the system. |
| [OH_AI_GetElementOfNNRTDeviceDescs](#oh_ai_getelementofnnrtdevicedescs)(NNRTDeviceDesc\*descs, size_t index) | Obtains the pointer to an element in the NNRt device description array.|
| [OH_AI_DestroyAllNNRTDeviceDescs](#oh_ai_destroyallnnrtdevicedescs)([NNRTDeviceDesc](#nnrtdevicedesc)\*\*desc) | Destroys the NNRt device description array obtained by [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs).|
| [OH_AI_GetDeviceIdFromNNRTDeviceDesc](#oh_ai_getdeviceidfromnnrtdevicedesc)(const[NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.|
| [OH_AI_GetNameFromNNRTDeviceDesc](#oh_ai_getnamefromnnrtdevicedesc)(const[NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device name from the specified NNRt device description. |
| [OH_AI_GetTypeFromNNRTDeviceDesc](#oh_ai_gettypefromnnrtdevicedesc)(const[NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device type from the specified NNRt device description. |
| [OH_AI_CreateNNRTDeviceInfoByName](#oh_ai_creatennrtdeviceinfobyname)(const char \*name) | Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_CreateNNRTDeviceInfoByType](#oh_ai_creatennrtdeviceinfobytype)([OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) type) | Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_DeviceInfoSetDeviceId](#oh_ai_deviceinfosetdeviceid)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, size_t device_id) | Sets the ID of an NNRt device. This function is available only for NNRt devices. |
| [OH_AI_DeviceInfoGetDeviceId](#oh_ai_deviceinfogetdeviceid)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the ID of an NNRt device. This function is available only for NNRt devices. |
| [OH_AI_DeviceInfoSetPerformanceMode](#oh_ai_deviceinfosetperformancemode)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_PerformanceMode](#oh_ai_performancemode) mode) | Sets the performance mode of an NNRt device. This function is available only for NNRt devices. |
| [OH_AI_DeviceInfoGetPerformanceMode](#oh_ai_deviceinfogetperformancemode)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the performance mode of an NNRt device. This function is available only for NNRt devices. |
| [OH_AI_DeviceInfoSetPriority](#oh_ai_deviceinfosetpriority)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_Priority](#oh_ai_priority) priority) | Sets the priority of an NNRt task. This function is available only for NNRt devices. |
| [OH_AI_DeviceInfoGetPriority](#oh_ai_deviceinfogetpriority)(const[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the priority of an NNRt task. This function is available only for NNRt devices. |
| [OH_AI_ModelCreate](#oh_ai_modelcreate)() | Creates a model object. |
| [OH_AI_ModelDestroy](#oh_ai_modeldestroy)([OH_AI_ModelHandle](#oh_ai_modelhandle)\*model) | Destroys a model object. |
| [OH_AI_ModelBuild](#oh_ai_modelbuild)([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const void \*model_data, size_t data_size, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from the memory buffer. |
...
...
| [OH_AI_ModelGetOutputs](#oh_ai_modelgetoutputs)(const[OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the output tensor array structure of a model. |
| [OH_AI_ModelGetInputByTensorName](#oh_ai_modelgetinputbytensorname)(const[OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the input tensor of a model by tensor name. |
| [OH_AI_ModelGetOutputByTensorName](#oh_ai_modelgetoutputbytensorname)(const[OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the output tensor of a model by tensor name. |
| [OH_AI_DeviceInfoAddExtension](#oh_ai_deviceinfoaddextension)([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*name, const char \*value, size_t value_size) | Adds extended configuration in the form of key/value pairs to the device information. This function is available only for NNRt device information.|
| [OH_AI_TensorGetName](#oh_ai_tensorgetname)(const[OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor name. |
| [OH_AI_TensorSetDataType](#oh_ai_tensorsetdatatype)([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_DataType](#oh_ai_datatype) type) | Sets the data type of a tensor. |
...
...
| [OH_AI_TensorGetMutableData](#oh_ai_tensorgetmutabledata)(const[OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to variable tensor data. If the data is empty, memory will be allocated. |
| [OH_AI_TensorGetElementNum](#oh_ai_tensorgetelementnum)(const[OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of tensor elements. |
| [OH_AI_TensorGetDataSize](#oh_ai_tensorgetdatasize)(const[OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of bytes of the tensor data. |
| [OH_AI_TensorSetUserData](#oh_ai_tensorsetuserdata)([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, void \*data, size_t data_size) | Sets user data for a tensor. This function allows you to reuse user data as the model input, which saves one data copy.|
This function allows you to reuse user data as the model input, which saves one data copy.
> **NOTE**<br>The user data is external data for the tensor and is not automatically released when the tensor is destroyed. The caller must release the data separately and ensure that the user data remains valid for as long as the tensor uses it.
**Parameters**
| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| data | Start address of user data.|
| data_size | Length of the user data.|
**Returns**
Execution status code. The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.
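For reference, the following minimal C sketch shows how this function might be used. It assumes a valid tensor handle (for example, the first model input) obtained elsewhere, assumes the header location used by your SDK, and simplifies error handling and input preprocessing.

```c
#include <stdlib.h>
// Assumed header location; adjust to the MindSpore Lite headers shipped with your SDK.
#include <mindspore/tensor.h>

// Binds a caller-owned buffer to a tensor so that inference reads the user data directly,
// which saves one data copy. The tensor handle is assumed to be a valid model input.
int BindInputBuffer(OH_AI_TensorHandle input) {
    size_t data_size = OH_AI_TensorGetDataSize(input);  // number of bytes the tensor expects
    void *buffer = malloc(data_size);
    if (buffer == NULL) {
        return -1;
    }
    // ... fill buffer with preprocessed input data here ...
    if (OH_AI_TensorSetUserData(input, buffer, data_size) != OH_AI_STATUS_SUCCESS) {
        free(buffer);
        return -1;
    }
    // The buffer is NOT released when the tensor is destroyed: keep it valid while the
    // tensor is in use, and free it yourself after inference is complete.
    return 0;
}
```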
Adds extended configuration in the form of key/value pairs to the device information. This function is available only for NNRt device information.
> **NOTE**<br>The key/value pairs currently supported include {"CachePath": "YourCachePath"}, {"CacheVersion": "YourCacheVersion"}, and {"QuantParam": "YourQuantConfig"}. Replace the values with the actual ones as required.
**Parameters**
| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| name | Key in an extended key/value pair. The value is a C string.|
| value | Start address of the value in an extended key/value pair.|
| value_size | Length of the value in an extended key/value pair.|
**Returns**
Status code enumerated by **OH_AI_Status**. The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.
**Since**
10
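As an illustration, the following C sketch adds one of the supported key/value pairs to NNRt device information. It assumes that **device_info** was created elsewhere (for example, via **OH_AI_CreateNNRTDeviceInfoByName** or **OH_AI_CreateNNRTDeviceInfoByType**), and the cache path value is a placeholder; the header location is also an assumption.

```c
#include <string.h>
// Assumed header location; adjust to the MindSpore Lite headers shipped with your SDK.
#include <mindspore/context.h>

// Adds the "CachePath" extended configuration to an NNRt device information instance.
// The device_info handle is assumed to describe an NNRt device.
OH_AI_Status SetNNRTCachePath(OH_AI_DeviceInfoHandle device_info) {
    const char *key = "CachePath";                   // one of the supported extension keys
    const char *value = "/path/to/your/cache/dir";   // placeholder; replace with a real path
    return OH_AI_DeviceInfoAddExtension(device_info, key, value, strlen(value));
}
```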
### OH_AI_GetElementOfNNRTDeviceDescs()
```
OH_AI_API NNRTDeviceDesc* OH_AI_GetElementOfNNRTDeviceDescs (NNRTDeviceDesc * descs, size_t index )
```
**Description**
Obtains the pointer to an element in the NNRt device description array.
**Parameters**
| Name| Description|
| -------- | -------- |
| descs | NNRt device description array.|
| index | Index of an array element.|
**Returns**
Pointer to an element in the NNRt device description array.
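For context, here is a minimal C sketch that combines this function with **OH_AI_GetAllNNRTDeviceDescs**, **OH_AI_GetNameFromNNRTDeviceDesc**, and **OH_AI_DestroyAllNNRTDeviceDescs** to list the NNRt devices in the system. The header location is an assumption and error handling is simplified.

```c
#include <stdio.h>
// Assumed header location; adjust to the MindSpore Lite headers shipped with your SDK.
#include <mindspore/context.h>

// Prints the name of every NNRt device found in the system.
void ListNNRTDevices(void) {
    size_t num = 0;
    NNRTDeviceDesc *descs = OH_AI_GetAllNNRTDeviceDescs(&num);
    if (descs == NULL) {
        printf("no NNRt device found\n");
        return;
    }
    for (size_t i = 0; i < num; ++i) {
        // Index into the description array and read the device name from the element.
        NNRTDeviceDesc *desc = OH_AI_GetElementOfNNRTDeviceDescs(descs, i);
        printf("NNRt device %zu: %s\n", i, OH_AI_GetNameFromNNRTDeviceDesc(desc));
    }
    // The description array is owned by the caller and must be destroyed explicitly.
    OH_AI_DestroyAllNNRTDeviceDescs(&descs);
}
```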
| [OH_AI_DeviceInfoSetFrequency](_mind_spore.md#oh_ai_deviceinfosetfrequency)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_DeviceInfoGetFrequency](_mind_spore.md#oh_ai_deviceinfogetfrequency)(const[OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_GetAllNNRTDeviceDescs](_mind_spore.md#oh_ai_getallnnrtdevicedescs)(size_t\*num) | Obtains the descriptions of all NNRt devices in the system.|
| [OH_AI_GetElementOfNNRTDeviceDescs](_mind_spore.md#oh_ai_getelementofnnrtdevicedescs)(NNRTDeviceDesc\*descs, size_t index) | Obtains the pointer to an element in the NNRt device description array.|
| [OH_AI_DestroyAllNNRTDeviceDescs](_mind_spore.md#oh_ai_destroyallnnrtdevicedescs)([NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc)\*\*desc) | Destroys the NNRt device description array obtained by [OH_AI_GetAllNNRTDeviceDescs](_mind_spore.md#oh_ai_getallnnrtdevicedescs).|
| [OH_AI_GetDeviceIdFromNNRTDeviceDesc](_mind_spore.md#oh_ai_getdeviceidfromnnrtdevicedesc)(const[NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.|
| [OH_AI_GetNameFromNNRTDeviceDesc](_mind_spore.md#oh_ai_getnamefromnnrtdevicedesc)(const[NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device name from the specified NNRt device description.|
| [OH_AI_GetTypeFromNNRTDeviceDesc](_mind_spore.md#oh_ai_gettypefromnnrtdevicedesc)(const[NNRTDeviceDesc](_mind_spore.md#nnrtdevicedesc) \*desc) | Obtains the NNRt device type from the specified NNRt device description.|
| [OH_AI_CreateNNRTDeviceInfoByName](_mind_spore.md#oh_ai_creatennrtdeviceinfobyname)(const char \*name) | Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_CreateNNRTDeviceInfoByType](_mind_spore.md#oh_ai_creatennrtdeviceinfobytype)([OH_AI_NNRTDeviceType](_mind_spore.md#oh_ai_nnrtdevicetype) type) | Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_DeviceInfoSetDeviceId](_mind_spore.md#oh_ai_deviceinfosetdeviceid)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, size_t device_id) | Sets the ID of an NNRt device. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetDeviceId](_mind_spore.md#oh_ai_deviceinfogetdeviceid)(const[OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the ID of an NNRt device. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPerformanceMode](_mind_spore.md#oh_ai_deviceinfosetperformancemode)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, [OH_AI_PerformanceMode](_mind_spore.md#oh_ai_performancemode) mode) | Sets the performance mode of an NNRt device. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPerformanceMode](_mind_spore.md#oh_ai_deviceinfogetperformancemode)(const[OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the performance mode of an NNRt device. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPriority](_mind_spore.md#oh_ai_deviceinfosetpriority)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, [OH_AI_Priority](_mind_spore.md#oh_ai_priority) priority) | Sets the priority of an NNRt task. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPriority](_mind_spore.md#oh_ai_deviceinfogetpriority)(const[OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info) | Obtains the priority of an NNRt task. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoAddExtension](_mind_spore.md#oh_ai_deviceinfoaddextension)([OH_AI_DeviceInfoHandle](_mind_spore.md#oh_ai_deviceinfohandle) device_info, const char \*name, const char \*value, size_t value_size) | Adds extended configuration in the form of key/value pairs to the device information. This function is available only for NNRt device information.|
| [OH_AI_TensorDestroy](_mind_spore.md#oh_ai_tensordestroy)([OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle)\*tensor) | Destroys a tensor object. |
| [OH_AI_TensorClone](_mind_spore.md#oh_ai_tensorclone)([OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor) | Clones a tensor. |
| [OH_AI_TensorSetName](_mind_spore.md#oh_ai_tensorsetname)([OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor, const char \*name) | Sets the name of a tensor. |
| [OH_AI_TensorGetName](_mind_spore.md#oh_ai_tensorgetname)(const[OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor) | Obtains the name of a tensor. |
| [OH_AI_TensorSetDataType](_mind_spore.md#oh_ai_tensorsetdatatype)([OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor, [OH_AI_DataType](_mind_spore.md#oh_ai_datatype) type) | Sets the data type of a tensor. |
| [OH_AI_TensorGetDataType](_mind_spore.md#oh_ai_tensorgetdatatype)(const[OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor) | Obtains the data type of a tensor. |
| [OH_AI_TensorSetShape](_mind_spore.md#oh_ai_tensorsetshape)([OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor, const int64_t \*shape, size_t shape_num) | Sets the shape of a tensor. |
| [OH_AI_TensorGetShape](_mind_spore.md#oh_ai_tensorgetshape)(const[OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor, size_t \*shape_num) | Obtains the shape of a tensor. |
| [OH_AI_TensorSetFormat](_mind_spore.md#oh_ai_tensorsetformat)([OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor, [OH_AI_Format](_mind_spore.md#oh_ai_format) format) | Sets the tensor data format. |
| [OH_AI_TensorGetFormat](_mind_spore.md#oh_ai_tensorgetformat)(const[OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor) | Obtains the tensor data format. |
| [OH_AI_TensorGetData](_mind_spore.md#oh_ai_tensorgetdata)(const[OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor) | Obtains the pointer to tensor data. |
| [OH_AI_TensorGetMutableData](_mind_spore.md#oh_ai_tensorgetmutabledata)(const[OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor) | Obtains the pointer to variable tensor data. If the data is empty, memory will be allocated. |
| [OH_AI_TensorGetElementNum](_mind_spore.md#oh_ai_tensorgetelementnum)(const[OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor) | Obtains the number of tensor elements. |
| [OH_AI_TensorGetDataSize](_mind_spore.md#oh_ai_tensorgetdatasize)(const[OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor) | Obtains the number of bytes of the tensor data. |
| [OH_AI_TensorSetData](_mind_spore.md#oh_ai_tensorsetdata)([OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor, void \*data) | Sets the tensor data.|
| [OH_AI_TensorSetUserData](_mind_spore.md#oh_ai_tensorsetuserdata)([OH_AI_TensorHandle](_mind_spore.md#oh_ai_tensorhandle) tensor, void \*data, size_t data_size) | Sets user data for a tensor. This function allows you to reuse user data as the model input, which saves one data copy.|
You can use either the ArkTS or native API to determine whether an API is available.
- Method 2: Import a module using the **import** API. If the current device does not support the module, the import result is **undefined**. Before using an API, you must make sure the API is available.
```ts
import geolocation from '@ohos.geolocation';
import geolocationManager from '@ohos.geoLocationManager';
```
| **Source Code of the Latest Release**| **Version**| **Site**| **SHA-256 Checksum**| **Software Package Size**|
| -------- | -------- | -------- | -------- | -------- |
| Full code base (for mini, small, and standard systems) | 4.0 Beta1 | [Download](https://repo.huaweicloud.com/openharmony/os/4.0-Beta1/code-v4.0-Beta1.tar.gz)| [Download](https://repo.huaweicloud.com/openharmony/os/4.0-Beta1/code-v4.0-Beta1.tar.gz.sha256)| 26.2 GB |
| Full code base (for mini, small, and standard systems) | 4.0 Beta2 | [Download](https://repo.huaweicloud.com/openharmony/os/4.0-Beta2/code-v4.0-Beta2.tar.gz)| [Download](https://repo.huaweicloud.com/openharmony/os/4.0-Beta2/code-v4.0-Beta2.tar.gz.sha256)| 27.7 GB |
Deprecated **sendMessage** and replaced it with **sendShortMessage**.
**Change Impact**
**sendMessage** is deprecated since API version 10. Use **sendShortMessage** instead in application development. The API functionality remains unchanged.
Changed the required permission of **getSimTelephoneNumber** from **ohos.permission.GET_TELEPHONY_STATE** to **ohos.permission.GET_PHONE_NUMBERS**.
**Change Impact**
The required permission of **getSimTelephoneNumber** is changed from **ohos.permission.GET_TELEPHONY_STATE** to **ohos.permission.GET_PHONE_NUMBERS** since API version 10.
An application needs to apply for the **ohos.permission.GET_PHONE_NUMBERS** permission. The API functionality remains unchanged.
In the **module.json** file, add the **ohos.permission.GET_PHONE_NUMBERS** permission or replace the existing permission with it. The sample code is as follows: