Commit b738fad7 authored by J jiaoyanlin3

Merge branch 'master' of https://gitee.com/openharmony/docs

# AI
- [AI Development](ai-overview.md)
- [Using MindSpore Lite JavaScript APIs to Develop AI Applications](mindspore-guidelines-based-js.md)
- [Using MindSpore Lite Native APIs to Develop AI Applications](mindspore-guidelines-based-native.md)
# AI Development
## Overview
OpenHarmony provides native distributed AI capabilities. The AI subsystem consists of the following components:
- MindSpore Lite: an AI inference framework that provides unified APIs for AI inference.
- Neural Network Runtime (NNRt): an intermediate bridge that connects the inference framework and AI hardware.
## MindSpore Lite
MindSpore Lite is a built-in AI inference framework of OpenHarmony. It provides AI model inference capabilities for different hardware devices and end-to-end AI model inference solutions for developers to empower intelligent applications in all scenarios. Currently, MindSpore Lite has been widely used in applications such as image classification, target recognition, facial recognition, and character recognition.
**Figure 1** Development process for MindSpore Lite model inference
![MindSpore workflow](figures/mindspore_workflow.png)
The MindSpore Lite development process consists of two phases:
- Model conversion
MindSpore Lite uses models in `.ms` format for inference. You can use the model conversion tool provided by MindSpore Lite to convert third-party framework models, such as TensorFlow, TensorFlow Lite, Caffe, and ONNX, into `.ms` models. For details, see [Converting Models for Inference](https://www.mindspore.cn/lite/docs/en/r1.8/use/converter_tool.html).
- Model inference
You can call the MindSpore Lite runtime APIs to implement model inference. The procedure is as follows:
1. Create an inference context by setting the inference hardware and number of threads.
2. Load the **.ms** model file.
3. Set the model input data.
4. Perform model inference, and read the output.
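As a concrete illustration of step 3, the buffer you fill into an input tensor must match the tensor's element count and data type. The arithmetic below is a generic sketch; the `[1, 224, 224, 3]` NHWC shape and float32 type are assumptions for illustration, not taken from any specific model:

```javascript
// Expected byte length of a float32 input tensor, given its NHWC shape.
// Hypothetical shape for illustration; a real model reports its own shape.
function expectedByteLength(shape, bytesPerElement = 4) {
  return shape.reduce((acc, dim) => acc * dim, 1) * bytesPerElement;
}

const shape = [1, 224, 224, 3]; // batch, height, width, channels (NHWC)
console.log(expectedByteLength(shape)); // 1 * 224 * 224 * 3 * 4 = 602112
```

Checking the byte length of the prepared buffer against this value before inference catches shape mismatches early.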
MindSpore Lite is built in the OpenHarmony standard system as a system component. You can develop AI applications based on MindSpore Lite in the following ways:
- Method 1: [Using MindSpore Lite JavaScript APIs to develop AI applications](./mindspore-guidelines-based-js.md). You directly call MindSpore Lite JavaScript APIs in the UI code to load the AI model and perform model inference. An advantage of this method is the quick verification of the inference effect.
- Method 2: [Using MindSpore Lite native APIs to develop AI applications](./mindspore-guidelines-based-native.md). You encapsulate the algorithm models and the code for calling MindSpore Lite native APIs into a dynamic library, and then use N-API to encapsulate the dynamic library into JavaScript APIs for the UI to call.
## Neural Network Runtime
Neural Network Runtime (NNRt) functions as a bridge to connect the upper-layer AI inference framework and bottom-layer acceleration chip, implementing cross-chip inference computing of AI models.
MindSpore Lite supports configuration of the NNRt backend, and therefore you can directly configure MindSpore Lite to use the NNRt hardware. The focus of this topic is about how to develop AI applications using MindSpore Lite. For details about how to use NNRt, see [Connecting the Neural Network Runtime to an AI Inference Framework](../napi/neural-network-runtime-guidelines.md).
# Using MindSpore Lite JavaScript APIs to Develop AI Applications
## Scenarios
MindSpore Lite is an AI engine that implements AI model inference for different hardware devices. It has been used in a wide range of fields, such as image classification, target recognition, facial recognition, and character recognition.
This document describes the general development process for implementing MindSpore Lite model inference. For details about how to use native APIs to implement model inference, see [Using MindSpore Lite for Model Inference](../napi/mindspore-lite-guidelines.md).
You can use the JavaScript APIs provided by MindSpore Lite to directly integrate MindSpore Lite capabilities into the UI code. This way, you can quickly deploy AI algorithms for AI model inference.
## Basic Concepts
## How to Develop
Assume that you have prepared a model in the **.ms** format. The key steps in model inference are model reading, model building, model inference, and memory release. The development procedure is as follows:
1. Prepare the required model. You can download a model directly or obtain one by using the model conversion tool. The input data is read from a `.bin` file.
- If the downloaded model is in the `.ms` format, you can use it directly for inference. This document uses `mnet.caffemodel.ms` as an example.
- If the downloaded model uses a third-party framework, such as TensorFlow, TensorFlow Lite, Caffe, or ONNX, you can use the [model conversion tool](https://www.mindspore.cn/lite/docs/en/r2.0/use/downloads.html#1-8-1) to convert it to the `.ms` format.
2. Create a context, and set parameters such as the number of runtime threads and device type.
3. Load the model. In this example, the model is read from the file.
4. Load data. Before executing a model, you need to obtain the model input and then fill data in the input tensor.
5. Perform inference and print the output. Call the **predict** API to perform model inference.
```js
@State inputName: string = 'mnet_caffemodel_nhwc.bin';
@State T_model_predict: string = 'Test_MSLiteModel_predict'
// Certain code omitted
build() {
  // Certain code omitted
    .fontSize(30)
    .fontWeight(FontWeight.Bold)
    .onClick(async () => {
      // Read the input data file from the rawfile directory.
      let syscontext = globalThis.context;
      syscontext.resourceManager.getRawFileContent(this.inputName).then((buffer) => {
        this.inputBuffer = buffer;
      }).catch(error => {
        console.error(`Failed to get buffer, error code: ${error.code}, message: ${error.message}.`);
      })
      // 1. Create a context.
      let context: mindSporeLite.Context = {};
      context.target = ['cpu'];
      context.cpu = {};
      context.cpu.threadNum = 1;
      context.cpu.threadAffinityMode = 0;
      context.cpu.precisionMode = 'enforce_fp32';
      // 2. Load the model.
      let modelFile = '/data/storage/el2/base/haps/entry/files/mnet.caffemodel.ms';
      let msLiteModel = await mindSporeLite.loadModelFromFile(modelFile, context);
      // 3. Set the input data.
      const modelInputs = msLiteModel.getInputs();
      modelInputs[0].setData(this.inputBuffer.buffer);
      // 4. Perform inference and print the output.
      console.log('=========MSLITE predict start=====')
      msLiteModel.predict(modelInputs).then((modelOutputs) => {
        let output0 = new Float32Array(modelOutputs[0].getData());
        // Certain code omitted
      })
    })
  // Certain code omitted
}
```
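For a classification model, the raw `Float32Array` output is typically post-processed into a predicted class index. A minimal, framework-independent sketch (the score values are made up for illustration):

```javascript
// Return the index of the highest score in a model's output vector.
function argmax(scores) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) {
      best = i;
    }
  }
  return best;
}

const output0 = new Float32Array([0.1, 0.7, 0.2]); // made-up scores
console.log(argmax(output0)); // 1
```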
## Debugging and Verification
1. On DevEco Studio, connect to the device, click **Run entry**, and compile your own HAP. The following information is displayed:
```shell
Launching com.example.myapptfjs
$ hdc uninstall com.example.myapptfjs
$ hdc install -r "path/to/xxx.hap"
$ hdc shell aa start -a EntryAbility -b com.example.myapptfjs
```
2. Use hdc to connect to the device, and push **mnet.caffemodel.ms** to the sandbox directory on the device. **mnet\_caffemodel\_nhwc.bin** is stored in the **rawfile** directory of the local project.
```shell
hdc -t your_device_id file send .\mnet.caffemodel.ms /data/app/el2/100/base/com.example.myapptfjs/haps/entry/files/
```
3. Click **Test\_MSLiteModel\_predict** on the device screen to run the test case. The following information is displayed in the HiLog printing result:
```shell
08-27 23:25:50.278 31782-31782/? I C03d00/JSAPP: =========MSLITE predict start=====
......
# Using MindSpore Lite Native APIs to Develop AI Applications
## Scenarios
You can use the native APIs provided by MindSpore Lite to deploy AI algorithms and provide APIs for the UI layer to invoke for model inference. A typical scenario is AI SDK development.
## Basic Concepts
- [N-API](../reference/native-lib/third_party_napi/napi.md): a set of native APIs used to build JavaScript components. N-APIs can be used to encapsulate libraries developed using C/C++ into JavaScript modules.
## Preparing the Environment
- Install DevEco Studio 3.1.0.500 or later, and update the SDK to API version 10 or later.
## How to Develop
1. Create a native C++ project.
Open DevEco Studio, choose **File** > **New** > **Create Project** to create a native C++ template project. By default, the **entry/src/main/** directory of the created project contains the **cpp/** directory. You can store C/C++ code in this directory and provide JavaScript APIs for the UI layer to call the code.
2. Compile the C++ inference code.
Assume that you have prepared a model in the **.ms** format.
Before using the Native APIs provided by MindSpore Lite for development, you need to reference the corresponding header files.
```c
#include <mindspore/model.h>
#include <mindspore/context.h>
#include <mindspore/status.h>
#include <mindspore/tensor.h>
```
(1). Read model files.
```C++
void *ReadModelFile(NativeResourceManager *nativeResourceManager, const std::string &modelName, size_t *modelSize) {
auto rawFile = OH_ResourceManager_OpenRawFile(nativeResourceManager, modelName.c_str());
if (rawFile == nullptr) {
LOGE("Open model file failed");
return nullptr;
}
long fileSize = OH_ResourceManager_GetRawFileSize(rawFile);
void *modelBuffer = malloc(fileSize);
if (modelBuffer == nullptr) {
LOGE("Allocate model buffer failed");
OH_ResourceManager_CloseRawFile(rawFile);
return nullptr;
}
int ret = OH_ResourceManager_ReadRawFile(rawFile, modelBuffer, fileSize);
if (ret == 0) {
LOGE("Read model file failed");
free(modelBuffer);
OH_ResourceManager_CloseRawFile(rawFile);
return nullptr;
}
OH_ResourceManager_CloseRawFile(rawFile);
*modelSize = fileSize;
return modelBuffer;
}
```
(2). Create a context, set parameters such as the number of threads and device type, and load the model.
```c++
OH_AI_ModelHandle CreateMSLiteModel(void *modelBuffer, size_t modelSize) {
// Create a context.
auto context = OH_AI_ContextCreate();
if (context == nullptr) {
DestroyModelBuffer(&modelBuffer);
LOGE("Create MSLite context failed.\n");
return nullptr;
}
auto cpu_device_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
OH_AI_ContextAddDeviceInfo(context, cpu_device_info);
// Load the .ms model file.
auto model = OH_AI_ModelCreate();
if (model == nullptr) {
DestroyModelBuffer(&modelBuffer);
LOGE("Allocate MSLite Model failed.\n");
return nullptr;
}
auto build_ret = OH_AI_ModelBuild(model, modelBuffer, modelSize, OH_AI_MODELTYPE_MINDIR, context);
DestroyModelBuffer(&modelBuffer);
if (build_ret != OH_AI_STATUS_SUCCESS) {
OH_AI_ModelDestroy(&model);
LOGE("Build MSLite model failed.\n");
return nullptr;
}
LOGI("Build MSLite model success.\n");
return model;
}
```
(3). Set the model input data, perform model inference, and obtain the output data.
```c++
void RunMSLiteModel(OH_AI_ModelHandle model) {
// Set the model input data.
auto inputs = OH_AI_ModelGetInputs(model);
FillInputTensors(inputs);
auto outputs = OH_AI_ModelGetOutputs(model);
// Perform inference and print the output.
auto predict_ret = OH_AI_ModelPredict(model, inputs, &outputs, nullptr, nullptr);
if (predict_ret != OH_AI_STATUS_SUCCESS) {
OH_AI_ModelDestroy(&model);
LOGE("Predict MSLite model error.\n");
return;
}
LOGI("Run MSLite model success.\n");
LOGI("Get model outputs:\n");
for (size_t i = 0; i < outputs.handle_num; i++) {
auto tensor = outputs.handle_list[i];
LOGI("- Tensor %{public}d name is: %{public}s.\n", static_cast<int>(i), OH_AI_TensorGetName(tensor));
LOGI("- Tensor %{public}d size is: %{public}d.\n", static_cast<int>(i), (int)OH_AI_TensorGetDataSize(tensor));
auto out_data = reinterpret_cast<const float *>(OH_AI_TensorGetData(tensor));
std::cout << "Output data is:";
for (int j = 0; (j < OH_AI_TensorGetElementNum(tensor)) && (j <= kNumPrintOfOutData); j++) {
std::cout << out_data[j] << " ";
}
std::cout << std::endl;
}
OH_AI_ModelDestroy(&model);
}
```
(4). Implement a complete model inference process.
```C++
static napi_value RunDemo(napi_env env, napi_callback_info info)
{
LOGI("Enter runDemo()");
GET_PARAMS(env, info, 2);
napi_value error_ret;
napi_create_int32(env, -1, &error_ret);
const std::string modelName = "ml_headpose.ms";
size_t modelSize;
auto resourcesManager = OH_ResourceManager_InitNativeResourceManager(env, argv[1]);
auto modelBuffer = ReadModelFile(resourcesManager, modelName, &modelSize);
if (modelBuffer == nullptr) {
LOGE("Read model failed");
return error_ret;
}
LOGI("Read model file success");
auto model = CreateMSLiteModel(modelBuffer, modelSize);
if (model == nullptr) {
LOGE("MSLiteFwk Build model failed.\n");
return error_ret;
}
RunMSLiteModel(model);
napi_value success_ret;
napi_create_int32(env, 0, &success_ret);
LOGI("Exit runDemo()");
return success_ret;
}
```
(5). Write the **CMake** script to link the MindSpore Lite dynamic library `libmindspore_lite_ndk.so`.
```cmake
cmake_minimum_required(VERSION 3.4.1)
project(OHOSMSLiteNapi)
set(NATIVERENDER_ROOT_PATH ${CMAKE_CURRENT_SOURCE_DIR})
include_directories(${NATIVERENDER_ROOT_PATH}
${NATIVERENDER_ROOT_PATH}/include)
add_library(mslite_napi SHARED mslite_napi.cpp)
target_link_libraries(mslite_napi PUBLIC mindspore_lite_ndk) # MindSpore Lite dynamic library to link
target_link_libraries(mslite_napi PUBLIC hilog_ndk.z)
target_link_libraries(mslite_napi PUBLIC rawfile.z)
target_link_libraries(mslite_napi PUBLIC ace_napi.z)
```
3. Use N-APIs to encapsulate C++ dynamic libraries into JavaScript modules.
Create the **libmslite_api/** subdirectory in **entry/src/main/cpp/types/**, and create the **index.d.ts** file in the subdirectory. The file content is as follows:
```js
export const runDemo: (a:String, b:Object) => number;
```
Use the preceding code to define the JavaScript API `runDemo()`.
In addition, add the **oh-package.json5** file to associate the API with the **.so** file to form a complete JavaScript module.
```json
{
"name": "libmslite_napi.so",
"types": "./index.d.ts"
}
```
4. Invoke the encapsulated MindSpore module in the UI code.
In **entry/src/ets/MainAbility/pages/index.ets**, define the **onClick()** event and call the encapsulated **runDemo()** API in the event callback.
```js
import msliteNapi from 'libmslite_napi.so' // Import the msliteNapi module.
// Certain code omitted
// Trigger the event when the text on the UI is tapped.
.onClick(() => {
resManager.getResourceManager().then(mgr => {
hilog.info(0x0000, TAG, '*** Start MSLite Demo ***');
let ret = 0;
ret = msliteNapi.runDemo("", mgr); // Call runDemo() to perform AI model inference.
if (ret == -1) {
hilog.info(0x0000, TAG, 'Error when running MSLite Demo!');
}
hilog.info(0x0000, TAG, '*** Finished MSLite Demo ***');
})
})
```
## Debugging and Verification
On DevEco Studio, connect to the device and click **Run entry**. The following log is generated for the application process:
```text
08-08 16:55:33.766 1513-1529/com.mslite.native_demo I A00000/MSLiteNativeDemo: *** Start MSLite Demo ***
08-08 16:55:33.766 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Enter runDemo()
08-08 16:55:33.772 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Read model file success
08-08 16:55:33.799 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Build MSLite model success.
08-08 16:55:33.818 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Run MSLite model success.
08-08 16:55:33.818 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Get model outputs:
08-08 16:55:33.818 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: - Tensor 0 name is: output_node_0.
08-08 16:55:33.818 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: - Tensor 0 size is: 12.
08-08 16:55:33.826 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Exit runDemo()
08-08 16:55:33.827 1513-1529/com.mslite.native_demo I A00000/MSLiteNativeDemo: *** Finished MSLite Demo ***
```
# Traffic Management
## Introduction
The traffic management module allows you to query real-time or historical data traffic by the specified network interface card (NIC) or user ID (UID).
Its functions include:
- Obtaining real-time traffic data by NIC or UID
- Obtaining historical traffic data by NIC or UID
- Subscribing to traffic change events by NIC or UID
> **NOTE**
> To maximize application running efficiency, most API calls are asynchronous, in either callback or promise mode. The following code examples use the callback mode. For details about the APIs, see [Traffic Management](../reference/apis/js-apis-net-statistics.md).
The following describes the development procedure specific to each application scenario.
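The callback and promise modes differ only in how the result is delivered. The generic adapter below is not an OpenHarmony API; it only illustrates how an error-first callback call maps onto a promise:

```javascript
// Wrap an error-first callback API into a promise-returning function.
function promisify(fn) {
  return (...args) => new Promise((resolve, reject) => {
    fn(...args, (error, result) => {
      if (error) {
        reject(error);
      } else {
        resolve(result);
      }
    });
  });
}

// Hypothetical callback-style API standing in for a statistics call.
function fakeGetRxBytes(nic, callback) {
  callback(null, 1024);
}

promisify(fakeGetRxBytes)('wlan0').then((bytes) => {
  console.log(bytes); // 1024
});
```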
## Available APIs
For the complete list of APIs and example code, see [Traffic Management](../reference/apis/js-apis-net-statistics.md).
| Type| API| Description|
| ---- | ---- | ---- |
| ohos.net.statistics | getIfaceRxBytes(nic: string, callback: AsyncCallback\<number>): void; |Obtains the real-time downlink data traffic of the specified NIC. |
| ohos.net.statistics | getIfaceTxBytes(nic: string, callback: AsyncCallback\<number>): void; |Obtains the real-time uplink data traffic of the specified NIC. |
| ohos.net.statistics | getCellularRxBytes(callback: AsyncCallback\<number>): void; |Obtains the real-time downlink data traffic of the cellular network.|
| ohos.net.statistics | getCellularTxBytes(callback: AsyncCallback\<number>): void; |Obtains the real-time uplink data traffic of the cellular network.|
| ohos.net.statistics | getAllRxBytes(callback: AsyncCallback\<number>): void; |Obtains the real-time downlink data traffic of all NICs. |
| ohos.net.statistics | getAllTxBytes(callback: AsyncCallback\<number>): void; |Obtains the real-time uplink data traffic of all NICs. |
| ohos.net.statistics | getUidRxBytes(uid: number, callback: AsyncCallback\<number>): void; |Obtains the real-time downlink data traffic of the specified application. |
| ohos.net.statistics | getUidTxBytes(uid: number, callback: AsyncCallback\<number>): void; |Obtains the real-time uplink data traffic of the specified application. |
| ohos.net.statistics | getTrafficStatsByIface(ifaceInfo: IfaceInfo, callback: AsyncCallback\<NetStatsInfo>): void; |Obtains the historical data traffic of the specified NIC. |
| ohos.net.statistics | getTrafficStatsByUid(uidInfo: UidInfo, callback: AsyncCallback\<NetStatsInfo>): void; |Obtains the historical data traffic of the specified application. |
| ohos.net.statistics | on(type: 'netStatsChange', callback: Callback\<{ iface: string, uid?: number }>): void |Subscribes to traffic change events.|
| ohos.net.statistics | off(type: 'netStatsChange', callback?: Callback\<{ iface: string, uid?: number }>): void; |Unsubscribes from traffic change events.|
## Obtaining Real-Time Traffic Data by NIC or UID
1. Obtain the real-time data traffic of the specified NIC.
2. Obtain the real-time data traffic of the cellular network.
3. Obtain the real-time data traffic of all NICs.
4. Obtain the real-time data traffic of the specified application.
```js
// Import the statistics namespace from @ohos.net.statistics.
import statistics from '@ohos.net.statistics'
// Obtain the real-time downlink data traffic of the specified NIC.
statistics.getIfaceRxBytes("wlan0", (error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time uplink data traffic of the specified NIC.
statistics.getIfaceTxBytes("wlan0", (error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time downlink data traffic of the cellular network.
statistics.getCellularRxBytes((error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time uplink data traffic of the cellular network.
statistics.getCellularTxBytes((error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time downlink data traffic of all NICs.
statistics.getAllRxBytes((error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time uplink data traffic of all NICs.
statistics.getAllTxBytes((error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time downlink data traffic of the specified application.
let uid = 20010038;
statistics.getUidRxBytes(uid, (error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time uplink data traffic of the specified application.
statistics.getUidTxBytes(uid, (error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
```
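The real-time counters above are cumulative byte totals, so throughput must be derived from two samples taken at different times. A generic sketch (the sample values are made up):

```javascript
// Average throughput in bytes per second between two cumulative samples.
function throughput(prevBytes, prevMs, currBytes, currMs) {
  const deltaMs = currMs - prevMs;
  if (deltaMs <= 0) {
    throw new Error('samples must be increasing in time');
  }
  return (currBytes - prevBytes) * 1000 / deltaMs;
}

// Two hypothetical rx-byte readings taken 2 seconds apart.
console.log(throughput(1000, 0, 5000, 2000)); // 2000 bytes/s
```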
## Obtaining Historical Traffic Data by NIC or UID
1. Obtain the historical data traffic of the specified NIC.
2. Obtain the historical data traffic of the specified application.
```js
let ifaceInfo = {
iface: "wlan0",
startTime: 1685948465,
endTime: 1685948567
}
// Obtain the historical data traffic of the specified NIC.
statistics.getTrafficStatsByIface(ifaceInfo, (error, statsInfo) => {
console.log(JSON.stringify(error))
console.log("getTrafficStatsByIface bytes of received = " + JSON.stringify(statsInfo.rxBytes));
console.log("getTrafficStatsByIface bytes of sent = " + JSON.stringify(statsInfo.txBytes));
console.log("getTrafficStatsByIface packets of received = " + JSON.stringify(statsInfo.rxPackets));
console.log("getTrafficStatsByIface packets of sent = " + JSON.stringify(statsInfo.txPackets));
});
let uidInfo = {
ifaceInfo: {
iface: "wlan0",
startTime: 1685948465,
endTime: 1685948567
},
uid: 20010037
}
// Obtain the historical data traffic of the specified application.
statistics.getTrafficStatsByUid(uidInfo, (error, statsInfo) => {
console.log(JSON.stringify(error))
console.log("getTrafficStatsByUid bytes of received = " + JSON.stringify(statsInfo.rxBytes));
console.log("getTrafficStatsByUid bytes of sent = " + JSON.stringify(statsInfo.txBytes));
console.log("getTrafficStatsByUid packets of received = " + JSON.stringify(statsInfo.rxPackets));
console.log("getTrafficStatsByUid packets of sent = " + JSON.stringify(statsInfo.txPackets));
});
```
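The `startTime`/`endTime` window must be well-formed; as the error-code reference notes, a query whose end time is earlier than its start time fails with an invalid-parameter error. A small validator you could apply before calling the API (a hypothetical helper, not part of the module):

```javascript
// Validate a traffic-query time window (seconds since the epoch).
function isValidWindow(info) {
  return Number.isInteger(info.startTime)
    && Number.isInteger(info.endTime)
    && info.startTime > 0
    && info.endTime > info.startTime;
}

console.log(isValidWindow({ iface: 'wlan0', startTime: 1685948465, endTime: 1685948567 })); // true
console.log(isValidWindow({ iface: 'wlan0', startTime: 1685948567, endTime: 1685948465 })); // false
```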
## Subscribing to Traffic Change Events
1. Subscribe to traffic change events.
2. Unsubscribe from traffic change events.
```js
let callback = data => {
console.log("on netStatsChange, data:" + JSON.stringify(data));
}
// Subscribe to traffic change events.
statistics.on('netStatsChange', callback);
// Unsubscribe from traffic change events. Pass the callback used in on() to unsubscribe that specific callback; omit it to unsubscribe from all callbacks for the event.
statistics.off('netStatsChange', callback);
statistics.off('netStatsChange');
```
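The on/off semantics above match a standard event-emitter contract: `off` with a callback removes that one listener, while `off` without a callback removes all listeners for the event. A generic sketch of that contract (not the statistics module itself):

```javascript
// Minimal event registry illustrating on/off semantics.
class Emitter {
  constructor() {
    this.listeners = new Map();
  }
  on(type, callback) {
    const list = this.listeners.get(type) || [];
    list.push(callback);
    this.listeners.set(type, list);
  }
  off(type, callback) {
    if (callback === undefined) {
      this.listeners.delete(type); // no callback: remove all listeners
      return;
    }
    const list = this.listeners.get(type) || [];
    this.listeners.set(type, list.filter((cb) => cb !== callback));
  }
  emit(type, data) {
    (this.listeners.get(type) || []).forEach((cb) => cb(data));
  }
}

const emitter = new Emitter();
const cb = (data) => console.log('netStatsChange:', data.iface);
emitter.on('netStatsChange', cb);
emitter.emit('netStatsChange', { iface: 'wlan0' }); // logs once
emitter.off('netStatsChange', cb);
emitter.emit('netStatsChange', { iface: 'wlan0' }); // logs nothing
```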
The **statistics** module provides APIs to query real-time or historical data traffic by the specified network interface card (NIC) or user ID (UID).
> **NOTE**
>
> The initial APIs of this module are supported since API version 10. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## Modules to Import
on(type: 'netStatsChange', callback: Callback\<{ iface: string, uid?: number }>): void
Subscribes to traffic change events.
**System API**: This is a system API.
| Name | Type | Mandatory| Description |
| -------- | --------------------------------------- | ---- | ---------- |
| type | string | Yes | Event type. This field has a fixed value of **netStatsChange**.|
| callback | Callback\<{ iface: string, uid?: number }\> | Yes | Callback invoked when the traffic changes.<br>**iface**: NIC name.<br>**uid**: application UID.|
**Error codes**
off(type: 'netStatsChange', callback?: Callback\<{ iface: string, uid?: number }>): void;
Unsubscribes from traffic change events.
**System API**: This is a system API.
| Name | Type | Mandatory| Description |
| -------- | --------------------------------------- | ---- | ---------- |
| type | string | Yes | Event type. This field has a fixed value of **netStatsChange**.|
| callback | Callback\<{ iface: string, uid?: number }\> | No | Callback invoked when the traffic changes.<br>**iface**: NIC name.<br>**uid**: application UID.|
**Error codes**
**Solution**
1. Check for unclosed result sets or transactions.
2. Close all result sets or transactions.
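Closing result sets deterministically is easiest with a try/finally pattern, so the read transaction is released even when reading throws. A generic sketch with a stand-in result set (not a real data-store object):

```javascript
// Stand-in result set to illustrate deterministic cleanup.
function makeResultSet() {
  return {
    closed: false,
    goToNextRow() { return false; }, // no rows in this stub
    close() { this.closed = true; },
  };
}

function readAll(resultSet) {
  try {
    while (resultSet.goToNextRow()) {
      // ... read columns from the current row ...
    }
  } finally {
    resultSet.close(); // always release the read transaction
  }
}

const rs = makeResultSet();
readAll(rs);
console.log(rs.closed); // true
```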
## 14800050 Failed to Obtain the Subscription Service
**Error Message**
Failed to obtain subscription service.
**Description**
The error code is returned when the subscription service failed to be obtained.
**Possible Causes**
The platform does not support service subscription.
**Solution**
Deploy the subscription service on the platform.
## 14801001 Stage Model Required
**Error Message**
Only supported in stage mode.
**Description**
This operation can be performed only on the stage model.
**Possible Causes**
The context is not a stage model.
**Solution**
Perform the operation on the stage model.
## 14801002 Invalid dataGroupId in storeConfig
**Error Message**
The data group id is not valid.
**Description**
The **dataGroupId** parameter is invalid.
**Possible Causes**
The **dataGroupId** is not obtained from the AppGallery.
**Solution**
Obtain **dataGroupId** from the AppGallery and pass it to **storeConfig** correctly.
# Policy Management Error Codes
> **NOTE**
>
> This topic describes only module-specific error codes. For details about universal error codes, see [Universal Error Codes](errorcode-universal.md).
## 2100001 Invalid Parameters
**Error Information**
Invalid parameter value.
**Description**
This error code is reported if any input parameter value is incorrect.
**Possible Causes**
The end time is earlier than the start time.
**Solution**
Check whether the start time and end time are properly set.
## 2100002 Service Connection Failure
**Error Information**
Operation failed. Cannot connect to service.
**Description**
This error code is reported if a service connection failure occurs.
**Possible Causes**
The service is abnormal.
**Solution**
Check whether system services are running properly.
## 2100003 System Internal Error
**Error Information**
System internal error.
**Description**
This error code is reported if an internal system error occurs.
**Possible Causes**
1. The memory is abnormal.
2. A null pointer is present.
**Solution**
1. Check whether the memory space is sufficient. If not, clear the memory and try again.
2. Check whether the system is normal. If not, try again later or restart the device.
1. Check that the file name is correct.
2. Check that the file path is correct.
## 15500019 Failed to Obtain the Subscription Service
**Error Message**
Failed to obtain subscription service.
**Description**
Failed to obtain the subscription service in inter-process event subscription.
**Possible Causes**
The platform does not support service subscription.
**Solution**
Deploy the subscription service on the platform.
## 14801001 Stage Model Required
**Error Message**
Only supported in stage mode.
**Description**
This operation can be performed only on the stage model.
**Possible Causes**
The context is not a stage model.
**Solution**
Perform the operation on the stage model.
## 15501002 The dataGroupId parameter in Options is invalid.
**Error Message**
The data group id is not valid.
**Description**
The **dataGroupId** parameter is invalid.
**Possible Causes**
The **dataGroupId** is not obtained from the AppGallery.
**Solution**
Obtain **dataGroupId** from the AppGallery and pass it correctly.
| HUKS_USER_AUTH_TYPE_FINGERPRINT | 0x0001 | Fingerprint authentication, which can be enabled with facial authentication and PIN authentication at the same time. |
| HUKS_USER_AUTH_TYPE_FACE | 0x0002 | Facial authentication, which can be enabled with fingerprint authentication and PIN authentication at the same time. |
| HUKS_USER_AUTH_TYPE_PIN | 0x0004 | PIN authentication, which can be enabled with fingerprint authentication and facial authentication at the same time. |
**Table 4** Secure access types
| --------------------------------------- | ----- | ------------------------------------------------------------ |
| HUKS_AUTH_ACCESS_INVALID_CLEAR_PASSWORD | 1 | Invalidates the key after the screen lock password is cleared. |
| HUKS_AUTH_ACCESS_INVALID_NEW_BIO_ENROLL | 2 | Invalidates the key after a biometric enrollment is added. The user authentication types must include the biometric authentication. |
**Table 5** Challenge types
| HUKS_CHALLENGE_TYPE_NORMAL | 0 | Normal challenge, which requires an independent user authentication for each use of the key. |
| HUKS_CHALLENGE_TYPE_CUSTOM | 1 | Custom challenge, which supports only one user authentication for multiple keys. |
| HUKS_CHALLENGE_TYPE_NONE | 2 | No challenge is required during user authentication. |
# Telephony Subsystem Changelog
## cl.telephony.radio.1 isNrSupported API Change
NR is a proper noun and must be capitalized.
You need to adapt your application.
**Change Impact**
The JS API needs to be adapted for applications developed based on earlier versions. Otherwise, relevant functions will be affected.
**Key API/Component Changes**
- Involved APIs:
isNrSupported(): boolean;
isNrSupported(slotId: number): boolean;
- Before change:
```js
function isNrSupported(): boolean;
function isNrSupported(slotId: number): boolean;
```
- After change:
```js
function isNRSupported(): boolean;
function isNRSupported(slotId: number): boolean;
```
**Adaptation Guide**
Use the new API. The sample code is as follows:
```js
let result = radio.isNrSupported();
console.log("Result: " + result);
```
```js
let slotId = 0;
let result = radio.isNRSupported(slotId);
console.log("Result: " + result);
```
## cl.telephony.call.2 dial API Change
Changed the `dial` API to the `dialCall` API in the call module of the telephony subsystem since API version 9.
You need to adapt your application.
**Change Impact**
The `dial` API is deprecated and cannot be used anymore. Use the `dialCall` API instead. Otherwise, relevant functions will be affected.
**Key API/Component Changes**
- Involved APIs:
dial(phoneNumber: string, callback: AsyncCallback<boolean>): void;
dial(phoneNumber: string, options: DialOptions, callback: AsyncCallback<boolean>): void;
dial(phoneNumber: string, options?: DialOptions): Promise<boolean>;
- Before change:
```js
function dial(phoneNumber: string, callback: AsyncCallback<boolean>): void;
function dial(phoneNumber: string, options: DialOptions, callback: AsyncCallback<boolean>): void;
function dial(phoneNumber: string, options?: DialOptions): Promise<boolean>;
```
- After change:
```js
function dialCall(phoneNumber: string, callback: AsyncCallback<void>): void;
function dialCall(phoneNumber: string, options: DialCallOptions, callback: AsyncCallback<void>): void;
function dialCall(phoneNumber: string, options?: DialCallOptions): Promise<void>;
```
**Adaptation Guide**
The `dial` API is deprecated and cannot be used anymore. Use the `dialCall` API instead.
Use the new API. The sample code is as follows:
```js
call.dialCall("138xxxxxxxx", (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
});
```
```js
call.dialCall("138xxxxxxxx", {
accountId: 0,
videoState: 0,
dialScene: 0,
dialType: 0,
}, (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
});
```
```js
call.dialCall('138xxxxxxxx').then(() => {
    console.log(`dialCall success`);
}).catch((error) => {
    console.log(`dialCall fail, promise: err->${JSON.stringify(error)}`);
});
```
- [Cross-Device Migration (for System Applications Only)](hop-cross-device-migration.md)
- [Multi-device Collaboration (for System Applications Only)](hop-multi-device-collaboration.md)
- [Subscribing to System Environment Variable Changes](subscribe-system-environment-variable-changes.md)
- [Sharing in Atomic Services](atomic-services-support-sharing.md)
- Understanding the Process Model
- [Process Model Overview](process-model-stage.md)
- Common Events
- [Resource Categories and Access](resource-categories-and-access.md)
- Learning ArkTS
- [Getting Started with ArkTS](arkts-get-started.md)
- [Introduction to ArkTS](introduction-to-arkts.md)
- [Migrating from TypeScript to ArkTS](typescript-to-arkts-migration-guide.md)
- UI Paradigm
- Basic Syntax
- [Basic Syntax Overview](arkts-basic-syntax-overview.md)
- [Overview of Other State Management Features](arkts-other-state-mgmt-functions-overview.md)
- [\@Watch Decorator: Notification of State Variable Changes](arkts-watch.md)
- [$$ Syntax: Two-Way Synchronization of Built-in Components](arkts-two-way-sync.md)
- [MVVM Pattern](arkts-mvvm.md)
- [State Management Best Practices](arkts-state-management-best-practices.md)
- Rendering Control
- [Rendering Control Overview](arkts-rendering-control-overview.md)
- [if/else: Conditional Rendering](arkts-rendering-control-ifelse.md)
- [ForEach: Rendering Repeated Content](arkts-rendering-control-foreach.md)
- [LazyForEach: Lazy Data Loading](arkts-rendering-control-lazyforeach.md)
- [Rendering Control Best Practices](arkts-rendering-control-best-practices.md)