# AI
- [AI Development](ai-overview.md)
- [Using MindSpore Lite JavaScript APIs to Develop AI Applications](mindspore-guidelines-based-js.md)
- [Using MindSpore Lite Native APIs to Develop AI Applications](mindspore-guidelines-based-native.md)
# AI Development
## Overview
OpenHarmony provides native distributed AI capabilities. The AI subsystem consists of the following components:
- MindSpore Lite: an AI inference framework that provides unified APIs for AI inference.
- Neural Network Runtime (NNRt): an intermediate bridge that connects the inference framework and AI hardware.
## MindSpore Lite
MindSpore Lite is a built-in AI inference framework of OpenHarmony. It provides AI model inference capabilities for different hardware devices and end-to-end AI model inference solutions for developers to empower intelligent applications in all scenarios. Currently, MindSpore Lite has been widely used in applications such as image classification, target recognition, facial recognition, and character recognition.
**Figure 1** Development process for MindSpore Lite model inference
![MindSpore workflow](figures/mindspore_workflow.png)
The MindSpore Lite development process consists of two phases:
- Model conversion
MindSpore Lite uses models in `.ms` format for inference. You can use the model conversion tool provided by MindSpore Lite to convert third-party framework models, such as TensorFlow, TensorFlow Lite, Caffe, and ONNX, into `.ms` models. For details, see [Converting Models for Inference](https://www.mindspore.cn/lite/docs/en/r1.8/use/converter_tool.html).
- Model inference
You can call the MindSpore Lite runtime APIs to implement model inference. The procedure is as follows (a condensed sketch follows this list):
1. Create an inference context by setting the inference hardware and number of threads.
2. Load the **.ms** model file.
3. Set the model input data.
4. Perform model inference, and read the output.
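For orientation, the four steps above can be condensed into the following JavaScript sketch. It is a minimal illustration rather than a complete sample: the import path (`@ohos.ai.mindSporeLite`), the model path, and the input buffer are placeholders or assumptions, and the full walkthrough is in the JavaScript guideline referenced below.
```js
// Minimal sketch of the four inference steps; paths and input data are placeholders.
import mindSporeLite from '@ohos.ai.mindSporeLite';

async function quickInference(inputBuffer: ArrayBuffer) {
  // 1. Create an inference context (CPU, single thread).
  let context: mindSporeLite.Context = { target: ['cpu'], cpu: { threadNum: 1 } };
  // 2. Load the .ms model file.
  let model = await mindSporeLite.loadModelFromFile('/path/to/model.ms', context);
  // 3. Fill the input tensor with data.
  let inputs = model.getInputs();
  inputs[0].setData(inputBuffer);
  // 4. Run inference and read the output.
  let outputs = await model.predict(inputs);
  let result = new Float32Array(outputs[0].getData());
  console.info('First output value: ' + result[0]);
}
```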
MindSpore Lite is built in the OpenHarmony standard system as a system component. You can develop AI applications based on MindSpore Lite in the following ways:
- Method 1: [Using MindSpore Lite JavaScript APIs to develop AI applications](./mindspore-guidelines-based-js.md). You directly call MindSpore Lite JavaScript APIs in the UI code to load the AI model and perform model inference. The advantage of this method is that you can quickly verify the inference effect.
- Method 2: [Using MindSpore Lite native APIs to develop AI applications](./mindspore-guidelines-based-native.md). You encapsulate the algorithm models and the code for calling MindSpore Lite native APIs into a dynamic library, and then use N-API to encapsulate the dynamic library into JavaScript APIs for the UI to call.
## Neural Network Runtime
Neural Network Runtime (NNRt) functions as a bridge to connect the upper-layer AI inference framework and bottom-layer acceleration chip, implementing cross-chip inference computing of AI models.
MindSpore Lite supports the NNRt backend, so you can configure MindSpore Lite to run directly on NNRt-managed hardware. This topic focuses on how to develop AI applications using MindSpore Lite. For details about how to use NNRt, see [Connecting the Neural Network Runtime to an AI Inference Framework](../napi/neural-network-runtime-guidelines.md).
# Using MindSpore Lite JavaScript APIs to Develop AI Applications
## Scenarios
MindSpore Lite is an AI engine that implements AI model inference for different hardware devices. It has been used in a wide range of fields, such as image classification, target recognition, facial recognition, and character recognition. You can use the JavaScript APIs provided by MindSpore Lite to directly integrate MindSpore Lite capabilities into the UI code. This way, you can quickly deploy AI algorithms for AI model inference.
This document describes the general development process for implementing MindSpore Lite model inference. For details about how to use native APIs to implement model inference, see [Using MindSpore Lite for Model Inference](../napi/mindspore-lite-guidelines.md).
## Basic Concepts
## How to Develop
Assume that you have prepared a model in the **.ms** format. The key steps in model inference are model reading, model building, model inference, and memory release. The development procedure is described as follows:
1. Prepare the required model. You can download the required model directly or obtain the model by using the model conversion tool. The required data is read from the `bin` file.
- If the downloaded model is in the `.ms` format, you can use it directly for inference. This document uses `mnet.caffemodel.ms` as an example.
- If the downloaded model uses a third-party framework, such as TensorFlow, TensorFlow Lite, Caffe, or ONNX, you can use the [model conversion tool](https://www.mindspore.cn/lite/docs/en/r2.0/use/downloads.html#1-8-1) to convert it to the `.ms` format.
2. Create a context, and set parameters such as the number of runtime threads and device type.
3. Load the model. In this example, the model is read from the file.
4. Load data. Before executing a model, you need to obtain the model input and then fill data in the input tensor.
5. Perform inference and print the output. Call the **predict** API to perform model inference.
```js
@State inputName: string = 'mnet_caffemodel_nhwc.bin';
@State T_model_predict: string = 'Test_MSLiteModel_predict'
// ...
build() {
  // ...
    .fontSize(30)
    .fontWeight(FontWeight.Bold)
    .onClick(async () => {
      let syscontext = globalThis.context;
      syscontext.resourceManager.getRawFileContent(this.inputName).then((buffer) => {
        this.inputBuffer = buffer;
        // ...
      }).catch(error => {
        console.error(`Failed to get buffer, error code: ${error.code}, message: ${error.message}.`);
      })
      // 1. Create a context.
      let context: mindSporeLite.Context = {};
      context.target = ['cpu'];
      context.cpu = {}
      context.cpu.threadNum = 1;
      context.cpu.threadAffinityMode = 0;
      context.cpu.precisionMode = 'enforce_fp32';
      // 2. Load the model.
      let modelFile = '/data/storage/el2/base/haps/entry/files/mnet.caffemodel.ms';
      let msLiteModel = await mindSporeLite.loadModelFromFile(modelFile, context);
      // 3. Set the input data.
      const modelInputs = msLiteModel.getInputs();
      modelInputs[0].setData(this.inputBuffer.buffer);
      // 4. Perform inference and print the output.
      console.log('=========MSLITE predict start=====')
      msLiteModel.predict(modelInputs).then((modelOutputs) => {
        let output0 = new Float32Array(modelOutputs[0].getData());
        // ...
```
## Debugging and Verification
1. On DevEco Studio, connect to the device, click **Run entry**, and compile your own HAP. The following information is displayed:
   ```shell
   Launching com.example.myapptfjs
   $ hdc uninstall com.example.myapptfjs
   $ hdc install -r "path/to/xxx.hap"
   $ hdc shell aa start -a EntryAbility -b com.example.myapptfjs
   ```
2. Use hdc to connect to the device, and push **mnet.caffemodel.ms** to the sandbox directory on the device. **mnet\_caffemodel\_nhwc.bin** is stored in the **rawfile** directory of the local project.
   ```shell
   hdc -t your_device_id file send .\mnet.caffemodel.ms /data/app/el2/100/base/com.example.myapptfjs/haps/entry/files/
   ```
3. Click **Test\_MSLiteModel\_predict** on the device screen to run the test case. The following information is displayed in the HiLog printing result:
   ```shell
   08-27 23:25:50.278 31782-31782/? I C03d00/JSAPP: =========MSLITE predict start=====
   ```
# Using MindSpore Lite Native APIs to Develop AI Applications
## Scenarios
You can use the native APIs provided by MindSpore Lite to deploy AI algorithms and provide APIs for the UI layer to invoke the algorithms for model inference. A typical scenario is AI SDK development.
## Basic Concepts
- [N-API](../reference/native-lib/third_party_napi/napi.md): a set of native APIs used to build JavaScript components. N-APIs can be used to encapsulate libraries developed using C/C++ into JavaScript modules.
## Preparing the Environment
- Install DevEco Studio 3.1.0.500 or later, and update the SDK to API version 10 or later.
## How to Develop
1. Create a native C++ project.
Open DevEco Studio, choose **File** > **New** > **Create Project** to create a native C++ template project. By default, the **entry/src/main/** directory of the created project contains the **cpp/** directory. You can store C/C++ code in this directory and provide JavaScript APIs for the UI layer to call the code.
2. Compile the C++ inference code.
Assume that you have prepared a model in the **.ms** format.
Before using the Native APIs provided by MindSpore Lite for development, you need to reference the corresponding header files.
```c
#include <mindspore/model.h>
#include <mindspore/context.h>
#include <mindspore/status.h>
#include <mindspore/tensor.h>
```
(1). Read model files.
```C++
void *ReadModelFile(NativeResourceManager *nativeResourceManager, const std::string &modelName, size_t *modelSize) {
auto rawFile = OH_ResourceManager_OpenRawFile(nativeResourceManager, modelName.c_str());
if (rawFile == nullptr) {
LOGE("Open model file failed");
return nullptr;
}
long fileSize = OH_ResourceManager_GetRawFileSize(rawFile);
void *modelBuffer = malloc(fileSize);
if (modelBuffer == nullptr) {
LOGE("Allocate model buffer failed");
OH_ResourceManager_CloseRawFile(rawFile);
return nullptr;
}
int ret = OH_ResourceManager_ReadRawFile(rawFile, modelBuffer, fileSize);
if (ret == 0) {
LOGE("Read model file failed");
free(modelBuffer);
OH_ResourceManager_CloseRawFile(rawFile);
return nullptr;
}
OH_ResourceManager_CloseRawFile(rawFile);
*modelSize = fileSize;
return modelBuffer;
}
```
(2). Create a context, set parameters such as the number of threads and device type, and load the model.
```c++
OH_AI_ModelHandle CreateMSLiteModel(void *modelBuffer, size_t modelSize) {
// Create a context.
auto context = OH_AI_ContextCreate();
if (context == nullptr) {
DestroyModelBuffer(&modelBuffer);
LOGE("Create MSLite context failed.\n");
return nullptr;
}
auto cpu_device_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
OH_AI_ContextAddDeviceInfo(context, cpu_device_info);
// Load the .ms model file.
auto model = OH_AI_ModelCreate();
if (model == nullptr) {
DestroyModelBuffer(&modelBuffer);
LOGE("Allocate MSLite Model failed.\n");
return nullptr;
}
auto build_ret = OH_AI_ModelBuild(model, modelBuffer, modelSize, OH_AI_MODELTYPE_MINDIR, context);
DestroyModelBuffer(&modelBuffer);
if (build_ret != OH_AI_STATUS_SUCCESS) {
OH_AI_ModelDestroy(&model);
LOGE("Build MSLite model failed.\n");
return nullptr;
}
LOGI("Build MSLite model success.\n");
return model;
}
```
(3). Set the model input data, perform model inference, and obtain the output data.
```c++
void RunMSLiteModel(OH_AI_ModelHandle model) {
// Set the model input data.
auto inputs = OH_AI_ModelGetInputs(model);
FillInputTensors(inputs);
auto outputs = OH_AI_ModelGetOutputs(model);
// Perform inference and print the output.
auto predict_ret = OH_AI_ModelPredict(model, inputs, &outputs, nullptr, nullptr);
if (predict_ret != OH_AI_STATUS_SUCCESS) {
OH_AI_ModelDestroy(&model);
LOGE("Predict MSLite model error.\n");
return;
}
LOGI("Run MSLite model success.\n");
LOGI("Get model outputs:\n");
for (size_t i = 0; i < outputs.handle_num; i++) {
auto tensor = outputs.handle_list[i];
LOGI("- Tensor %{public}d name is: %{public}s.\n", static_cast<int>(i), OH_AI_TensorGetName(tensor));
LOGI("- Tensor %{public}d size is: %{public}d.\n", static_cast<int>(i), (int)OH_AI_TensorGetDataSize(tensor));
auto out_data = reinterpret_cast<const float *>(OH_AI_TensorGetData(tensor));
std::cout << "Output data is:";
for (int i = 0; (i < OH_AI_TensorGetElementNum(tensor)) && (i <= kNumPrintOfOutData); i++) {
std::cout << out_data[i] << " ";
}
std::cout << std::endl;
}
OH_AI_ModelDestroy(&model);
}
```
(4). Implement a complete model inference process.
```C++
static napi_value RunDemo(napi_env env, napi_callback_info info)
{
LOGI("Enter runDemo()");
GET_PARAMS(env, info, 2);
napi_value error_ret;
napi_create_int32(env, -1, &error_ret);
const std::string modelName = "ml_headpose.ms";
size_t modelSize;
auto resourcesManager = OH_ResourceManager_InitNativeResourceManager(env, argv[1]);
auto modelBuffer = ReadModelFile(resourcesManager, modelName, &modelSize);
if (modelBuffer == nullptr) {
LOGE("Read model failed");
return error_ret;
}
LOGI("Read model file success");
auto model = CreateMSLiteModel(modelBuffer, modelSize);
if (model == nullptr) {
OH_AI_ModelDestroy(&model);
LOGE("MSLiteFwk Build model failed.\n");
return error_ret;
}
RunMSLiteModel(model);
napi_value success_ret;
napi_create_int32(env, 0, &success_ret);
LOGI("Exit runDemo()");
return success_ret;
}
```
(5). Write the **CMake** script to link the MindSpore Lite dynamic library `libmindspore_lite_ndk.so`.
```cmake
cmake_minimum_required(VERSION 3.4.1)
project(OHOSMSLiteNapi)
set(NATIVERENDER_ROOT_PATH ${CMAKE_CURRENT_SOURCE_DIR})
include_directories(${NATIVERENDER_ROOT_PATH}
${NATIVERENDER_ROOT_PATH}/include)
add_library(mslite_napi SHARED mslite_napi.cpp)
target_link_libraries(mslite_napi PUBLIC mindspore_lite_ndk) # MindSpore Lite dynamic library to link
target_link_libraries(mslite_napi PUBLIC hilog_ndk.z)
target_link_libraries(mslite_napi PUBLIC rawfile.z)
target_link_libraries(mslite_napi PUBLIC ace_napi.z)
```
3. Use N-APIs to encapsulate C++ dynamic libraries into JavaScript modules.
Create the **libmslite_api/** subdirectory in **entry/src/main/cpp/types/**, and create the **index.d.ts** file in the subdirectory. The file content is as follows:
```js
export const runDemo: (a:String, b:Object) => number;
```
Use the preceding code to define the JavaScript API `runDemo()`.
In addition, add the **oh-package.json5** file to associate the API with the **.so** file to form a complete JavaScript module.
```json
{
"name": "libmslite_napi.so",
"types": "./index.d.ts"
}
```
4. Invoke the encapsulated MindSpore module in the UI code.
In **entry/src/ets/MainAbility/pages/index.ets**, define the **onClick()** event and call the encapsulated **runDemo()** API in the event callback.
```js
import msliteNapi from 'libmslite_napi.so' // Import the msliteNapi module.
// Certain code omitted
// Trigger the event when the text on the UI is tapped.
.onClick(() => {
resManager.getResourceManager().then(mgr => {
hilog.info(0x0000, TAG, '*** Start MSLite Demo ***');
let ret = 0;
ret = msliteNapi.runDemo("", mgr); // Call runDemo() to perform AI model inference.
if (ret == -1) {
hilog.info(0x0000, TAG, 'Error when running MSLite Demo!');
}
hilog.info(0x0000, TAG, '*** Finished MSLite Demo ***');
})
})
```
## Debugging and Verification
On DevEco Studio, connect to the device and click **Run entry**. The following log is generated for the application process:
```text
08-08 16:55:33.766 1513-1529/com.mslite.native_demo I A00000/MSLiteNativeDemo: *** Start MSLite Demo ***
08-08 16:55:33.766 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Enter runDemo()
08-08 16:55:33.772 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Read model file success
08-08 16:55:33.799 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Build MSLite model success.
08-08 16:55:33.818 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Run MSLite model success.
08-08 16:55:33.818 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Get model outputs:
08-08 16:55:33.818 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: - Tensor 0 name is: output_node_0.
08-08 16:55:33.818 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: - Tensor 0 size is: 12.
08-08 16:55:33.826 1513-1529/com.mslite.native_demo I A00000/[MSLiteNapi]: Exit runDemo()
08-08 16:55:33.827 1513-1529/com.mslite.native_demo I A00000/MSLiteNativeDemo: *** Finished MSLite Demo ***
```
# Traffic Management
## Introduction
The traffic management module allows you to query real-time or historical data traffic by the specified network interface card (NIC) or user ID (UID).
Its functions include:
- Obtaining real-time traffic data by NIC or UID
- Obtaining historical traffic data by NIC or UID
- Subscribing to traffic change events by NIC or UID
> **NOTE**
> To maximize application running efficiency, most APIs are called asynchronously, in either callback or promise mode. The following code examples use the callback mode. For details about the APIs, see [Traffic Management](../reference/apis/js-apis-net-statistics.md).
The following describes the development procedure specific to each application scenario.
## Available APIs
For the complete list of APIs and example code, see [Traffic Management](../reference/apis/js-apis-net-statistics.md).
| Type| API| Description|
| ---- | ---- | ---- |
| ohos.net.statistics | getIfaceRxBytes(nic: string, callback: AsyncCallback\<number>): void; |Obtains the real-time downlink data traffic of the specified NIC. |
| ohos.net.statistics | getIfaceTxBytes(nic: string, callback: AsyncCallback\<number>): void; |Obtains the real-time uplink data traffic of the specified NIC. |
| ohos.net.statistics | getCellularRxBytes(callback: AsyncCallback\<number>): void; |Obtains the real-time downlink data traffic of the cellular network.|
| ohos.net.statistics | getCellularTxBytes(callback: AsyncCallback\<number>): void; |Obtains the real-time uplink data traffic of the cellular network.|
| ohos.net.statistics | getAllRxBytes(callback: AsyncCallback\<number>): void; |Obtains the real-time downlink data traffic of all NICs. |
| ohos.net.statistics | getAllTxBytes(callback: AsyncCallback\<number>): void; |Obtains the real-time uplink data traffic of all NICs. |
| ohos.net.statistics | getUidRxBytes(uid: number, callback: AsyncCallback\<number>): void; |Obtains the real-time downlink data traffic of the specified application. |
| ohos.net.statistics | getUidTxBytes(uid: number, callback: AsyncCallback\<number>): void; |Obtains the real-time uplink data traffic of the specified application. |
| ohos.net.statistics | getTrafficStatsByIface(ifaceInfo: IfaceInfo, callback: AsyncCallback\<NetStatsInfo>): void; |Obtains the historical data traffic of the specified NIC. |
| ohos.net.statistics | getTrafficStatsByUid(uidInfo: UidInfo, callback: AsyncCallback\<NetStatsInfo>): void; |Obtains the historical data traffic of the specified application. |
| ohos.net.statistics | on(type: 'netStatsChange', callback: Callback\<{ iface: string, uid?: number }>): void |Subscribes to traffic change events.|
| ohos.net.statistics | off(type: 'netStatsChange', callback?: Callback\<{ iface: string, uid?: number }>): void; |Unsubscribes from traffic change events.|
## Obtaining Real-Time Traffic Data by NIC or UID
1. Obtain the real-time data traffic of the specified NIC.
2. Obtain the real-time data traffic of the cellular network.
3. Obtain the real-time data traffic of all NICs.
4. Obtain the real-time data traffic of the specified application.
```js
// Import the statistics namespace from @ohos.net.statistics.
import statistics from '@ohos.net.statistics'
// Obtain the real-time downlink data traffic of the specified NIC.
statistics.getIfaceRxBytes("wlan0", (error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time uplink data traffic of the specified NIC.
statistics.getIfaceTxBytes("wlan0", (error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time downlink data traffic of the cellular network.
statistics.getCellularRxBytes((error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time uplink data traffic of the cellular network.
statistics.getCellularTxBytes((error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time downlink data traffic of all NICs.
statistics.getAllRxBytes((error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time uplink data traffic of all NICs.
statistics.getAllTxBytes((error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time downlink data traffic of the specified application.
let uid = 20010038;
statistics.getUidRxBytes(uid, (error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
// Obtain the real-time uplink data traffic of the specified application.
statistics.getUidTxBytes(uid, (error, stats) => {
console.log(JSON.stringify(error))
console.log(JSON.stringify(stats))
})
```
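The callback-style calls above can also be written in promise style. The following is a hedged sketch that assumes each of these APIs has a promise overload mirroring the callback signature listed in the table above; check the API reference for the exact signatures.
```js
// Promise-style variants (assumed to mirror the callback APIs above).
statistics.getIfaceRxBytes("wlan0").then((stats) => {
  console.log(JSON.stringify(stats));
}).catch((error) => {
  console.log(JSON.stringify(error));
});

statistics.getUidTxBytes(20010038).then((stats) => {
  console.log(JSON.stringify(stats));
}).catch((error) => {
  console.log(JSON.stringify(error));
});
```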
## Obtaining Historical Traffic Data by NIC or UID
1. Obtain the historical data traffic of the specified NIC.
2. Obtain the historical data traffic of the specified application.
```js
let ifaceInfo = {
iface: "wlan0",
startTime: 1685948465,
endTime: 16859485670
}
// Obtain the historical data traffic of the specified NIC.
statistics.getTrafficStatsByIface(ifaceInfo, (error, statsInfo) => {
console.log(JSON.stringify(error))
console.log("getTrafficStatsByIface bytes of received = " + JSON.stringify(statsInfo.rxBytes));
console.log("getTrafficStatsByIface bytes of sent = " + JSON.stringify(statsInfo.txBytes));
console.log("getTrafficStatsByIface packets of received = " + JSON.stringify(statsInfo.rxPackets));
console.log("getTrafficStatsByIface packets of sent = " + JSON.stringify(statsInfo.txPackets));
});
let uidInfo = {
ifaceInfo: {
iface: "wlan0",
startTime: 1685948465,
endTime: 16859485670
},
uid: 20010037
}
// Obtain the historical data traffic of the specified application.
statistics.getTrafficStatsByUid(uidInfo, (error, statsInfo) => {
console.log(JSON.stringify(error))
console.log("getTrafficStatsByUid bytes of received = " + JSON.stringify(statsInfo.rxBytes));
console.log("getTrafficStatsByUid bytes of sent = " + JSON.stringify(statsInfo.txBytes));
console.log("getTrafficStatsByUid packets of received = " + JSON.stringify(statsInfo.rxPackets));
console.log("getTrafficStatsByUid packets of sent = " + JSON.stringify(statsInfo.txPackets));
});
```
## Subscribing to Traffic Change Events
1. Subscribe to traffic change events.
2. Unsubscribe from traffic change events.
```js
let callback = data => {
console.log("on netStatsChange, data:" + JSON.stringify(data));
}
// Subscribe to traffic change events.
statistics.on('netStatsChange', callback);
// Unsubscribe from traffic change events. Pass the callback registered with on() to unsubscribe that specific callback; if you omit the callback, all callbacks for this event type are unsubscribed.
statistics.off('netStatsChange', callback);
statistics.off('netStatsChange');
```
The **statistics** module provides APIs to query real-time or historical data traffic by the specified network interface card (NIC) or user ID (UID).
> **NOTE**
> The initial APIs of this module are supported since API version 10. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## Modules to Import
on(type: 'netStatsChange', callback: Callback\<{ iface: string, uid?: number }>): void
Subscribes to traffic change events.
**System API**: This is a system API.
| Name | Type | Mandatory| Description |
| -------- | --------------------------------------- | ---- | ---------- |
| type | string | Yes | Event type. This field has a fixed value of **netStatsChange**.|
| callback | Callback\<{ iface: string, uid?: number }\> | Yes | Callback invoked when the traffic changes.<br>**iface**: NIC name.<br>**uid**: application UID.|
**Error codes**
off(type: 'netStatsChange', callback?: Callback\<{ iface: string, uid?: number }>): void;
Unsubscribes from traffic change events.
**System API**: This is a system API.
| Name | Type | Mandatory| Description |
| -------- | --------------------------------------- | ---- | ---------- |
| type | string | Yes | Event type. This field has a fixed value of **netStatsChange**.|
| callback | Callback\<{ iface: string, uid?: number }\> | No | Callback invoked when the traffic changes.<br>**iface**: NIC name.<br>**uid**: application UID.|
**Error codes**
# Policy Management Error Codes
> **NOTE**
>
> This topic describes only module-specific error codes. For details about universal error codes, see [Universal Error Codes](errorcode-universal.md).
## 2100001 Invalid Parameters
**Error Information**
Invalid parameter value.
**Description**
This error code is reported if any input parameter value is incorrect.
**Possible Causes**
The end time is earlier than the start time.
**Solution**
Check whether the start time and end time are properly set.
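As an illustration, the sketch below uses the traffic statistics API described earlier in this document with a reversed time range, which is the kind of input that triggers this error. The exact way the error surfaces (callback error object vs. thrown exception) may differ by API, so treat this as a hypothetical example.
```js
import statistics from '@ohos.net.statistics'

// Hypothetical example: endTime is earlier than startTime, which should trigger error 2100001.
let badIfaceInfo = {
  iface: "wlan0",
  startTime: 1685948465,
  endTime: 1685948400
};
statistics.getTrafficStatsByIface(badIfaceInfo, (error, statsInfo) => {
  if (error && error.code === 2100001) {
    console.log("Invalid parameter value: check that startTime is earlier than endTime.");
    return;
  }
  console.log(JSON.stringify(statsInfo));
});
```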
## 2100002 Service Connection Failure
**Error Information**
Operation failed. Cannot connect to service.
**Description**
This error code is reported if a service connection failure occurs.
**Possible Causes**
The service is abnormal.
**Solution**
Check whether system services are running properly.
## 2100003 System Internal Error
**Error Information**
System internal error.
**Description**
This error code is reported if an internal system error occurs.
**Possible Causes**
1. The memory is abnormal.
2. A null pointer is present.
**Solution**
1. Check whether the memory space is sufficient. If not, clear the memory and try again.
2. Check whether the system is normal. If not, try again later or restart the device.
# Obtaining Source Code
## About OpenHarmony
OpenHarmony is an open source project launched by the OpenAtom Foundation. The purpose of this project is to build an open, distributed operating system (OS) framework for smart IoT devices in the full-scenario, full-connectivity, and full-intelligence era.
## Overview of Source Code Acquisition
This document describes how to acquire OpenHarmony source code and provides its directory structure. The OpenHarmony source code is open to you as [HPM parts](../hpm-part/hpm-part-about.md), which can be obtained in any of the following ways:
- **Method 1**: Acquire the source code from the Gitee code repository. You can use the **repo** or **git** tool to download the latest code from the code repository.
- **Method 2**: Acquire the source code from [DevEco Marketplace](https://repo.harmonyos.com/#/en/home). Visit [DevEco Marketplace](https://repo.harmonyos.com/#/en/home), search for your desired open source distribution, and download the component list (or customize components and download the component list). Then use the **hpm-cli** tool to download and install the components and compilation toolchain on your local PC.
- **Method 3**: Download the compressed file of a distribution from a mirror site. This method provides a fast download speed, so you can also use this method for obtaining the source code of an earlier version.
- **Method 4**: Acquire the source code from the GitHub image repository. You can use the **repo** or **git** tool to download the latest code from the code repository.
## Method 1: Acquiring Source Code from the Gitee Code Repository
### When to Use
- You want to establish a baseline based on stable OpenHarmony releases and distribute the baseline to your customers.
- You have interconnected your software with OpenHarmony and need official certification from OpenHarmony.
- You want to contribute code to the OpenHarmony community after obtaining official OpenHarmony certification for chips, modules, and applications.
- You need to rectify OpenHarmony issues.
- You want to learn OpenHarmony source code.
### Prerequisites
1. Register your account with Gitee.
2. Register an SSH public key for access to Gitee.
3. Install the [Git client](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [git-lfs](https://gitee.com/vcs-all-in-one/git-lfs?_from=gitee_search#downloading), and configure basic user information.
   ```shell
   git config --global user.name "yourname"
   git config --global user.email "your-email-address"
   git config --global credential.helper store
   ```
4. Install the **repo** tool:
   In this example, **~/bin** is used as an example installation directory. You can change the directory as needed.
### Procedure
> **NOTE**<br>
> Download the release code, which is more stable, if you want to develop commercial functionalities. Download the master code if you want to get quick access to the latest features for your development.
- **Obtaining OpenHarmony release code**
  For details about how to obtain the source code of an OpenHarmony release, see the [Release Notes](../../release-notes/Readme.md).
- **Obtaining OpenHarmony master code**
  Method 1 (recommended): Use the **repo** tool to download the source code over SSH. (You must have registered an SSH public key for access to Gitee.)
  ```shell
  repo init -u git@gitee.com:openharmony/manifest.git -b master --no-repo-verify
  repo sync -c
  repo forall -c 'git lfs pull'
  ```
  Method 2: Use the **repo** tool to download the source code over HTTPS.
  ```shell
  repo init -u https://gitee.com/openharmony/manifest.git -b master --no-repo-verify
  repo sync -c
  repo forall -c 'git lfs pull'
  ```
## Method 2: Acquiring Source Code from DevEco Marketplace
### When to Use
If OpenHarmony is new to you, sample solutions are helpful to your development. You can obtain an open source distribution from [DevEco Marketplace](https://repo.harmonyos.com/#/en/home), or customize a distribution by adding or deleting components of an open source distribution. Then use the **hpm-cli** tool to download and install the components and compilation toolchain on your local PC.
### Prerequisites
You must install **Node.js** and HPM on your local PC. The installation procedure is as follows:
1. Install **Node.js**.
   Download **Node.js** from its official website, and install it on your local PC.
   The [Node.js](https://nodejs.org/) version must be 12.x (including npm 6.14.4) or later. An LTS version is recommended.
2. Install the **hpm-cli** tool using **npm** delivered with **Node.js**.
   Open the CMD window, and run the following command:
   ```shell
   npm install -g @ohos/hpm-cli
   ```
3. Run the following command to check whether the installation is successful. If the HPM version is displayed, the installation is successful.
   ```shell
   hpm -V or hpm --version
   ```
4. Upgrade the HPM version as needed.
   ```shell
   npm update -g @ohos/hpm-cli
   ```
### Procedure
1. Search for distributions.
   1. Access [DevEco Marketplace](https://repo.harmonyos.com/#/en/home), and click **Device**. Then go to the **Open Source Distribution** page.
   2. Enter a keyword, for example, **camera**, in the search box. All matched distributions are displayed.
   3. Specify filter criteria, such as the OS, board, and kernel, to further filter the distributions.
   4. Find your desired distribution, and click it to view details.
   **Figure 1** HPM page
   ![](figures/hpm-page.png "hpm-page")
2. Learn more about the distribution.
   1. Read carefully the information about the distribution to learn its application scenarios, features, components, usage, and customization methods.
   2. Click **Download** if you want to download the distribution to your local PC.
   3. Click **Device component tailoring** if you want to add or delete components of the distribution.
   **Figure 2** Example distribution
   ![](figures/example-distribution.png "example-distribution")
3. Customize components.
   1. Access the **Device Component Tailoring** page.
   2. Add or delete components.
      - In the **Customizable Components** pane, click the plus sign. In the displayed dialog box, add required components.
      - In the **Customizable Components** pane, click the minus sign next to a component to delete it.
   3. Enter the basic information about your project, including the name, version, and description, on the right pane.
   4. Click **Download**. The system generates the OpenHarmony code structure file (for example, **my_cust_dist.zip**) and saves it to your local PC.
   **Figure 3** Customizing components
   ![](figures/customizing-bundles.png "customizing-bundles")
4. Install components.
   1. Decompress the downloaded code structure file using CMD on Windows (or shell in Linux).
   2. In the generated directory, run the **hpm install** command to download and install components. If the **Install successful** message is displayed, the command has been executed successfully.
   3. The downloaded components will be stored in the **ohos_bundles** folder under the project directory. (The source code of some components will be copied to a specified directory after the components are installed.)
## Method 3: Acquiring Source Code from a Mirror Site
To ensure the download performance, you are advised to download the source code or the corresponding solution from the image library of the respective site listed in the table below.
The table below provides only the sites for downloading the latest OpenHarmony LTS code. For details about how to obtain the source code of earlier versions, see the [Release Notes](../../release-notes/Readme.md).
**Table 1** Sites for acquiring source code
| **LTS Code**| **Version**| **Site**| **SHA-256 Checksum**| **Software Package Size**|
| -------- | -------- | -------- | -------- | -------- |
| **Compiler Toolchain**| **Version**| **Site**| **SHA-256 Checksum**| **Software Package Size**|
| Compiler toolchain| - | [Download](https://repo.huaweicloud.com/openharmony/os/2.0/tool_chain/)| - | - |
## Method 4: Acquiring Source Code from the GitHub Image Repository
> **NOTE**<br>
> The image repository is synchronized at 23:00 (UTC +8:00) every day.
Method 1 (recommended): Use the **repo** tool to download the source code over SSH. (You must have registered an SSH public key for access to GitHub. For details, see [Adding a new SSH key to your GitHub account](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account).)
```shell
repo init -u git@github.com:openharmony/manifest.git -b master --no-repo-verify
repo sync -c
repo forall -c 'git lfs pull'
```
Method 2: Use the **repo** tool to download the source code over HTTPS.
```shell
repo init -u https://github.com/openharmony/manifest.git -b master --no-repo-verify
repo sync -c
repo forall -c 'git lfs pull'
```
## Source Code Directories
The following table describes the OpenHarmony source code directories.
**Table 2** Source code directories
| **Directory**| **Description**|
| -------- | -------- |
| applications | Application samples, for example, **camera**.|
| base | Basic software service subsystem set and hardware service subsystem set.|
| build | Component-based compilation, building, and configuration scripts.|
| docs | Reference documents.|
| domains | Enhanced software service subsystem set.|
| drivers | Driver subsystem.|
| kernel | Kernel subsystem.|
| prebuilts | Compiler and tool chain subsystem.|
| test | Test subsystem.|
| third_party | Open source third-party software.|
| utils | Commonly used development utilities.|
| vendor | Vendor-provided software.|
| build.py | Build script file.|
<!--no_check-->
# Telephony Subsystem Changelog
## cl.telephony.radio.1 isNrSupported API Change
NR is a proper noun and must be capitalized.
You need to adapt your application.
**Change Impact**
The JS API needs to be adapted for applications developed based on earlier versions. Otherwise, relevant functions will be affected.
**Key API/Component Changes**
- Involved APIs:
isNrSupported(): boolean;
isNrSupported(slotId: number): boolean;
- Before change:
```js
function isNrSupported(): boolean;
function isNrSupported(slotId: number): boolean;
```
- After change:
```js
function isNRSupported(): boolean;
function isNRSupported(slotId: number): boolean;
```
**Adaptation Guide**
Use the new API. The sample code is as follows:
```js
let result = radio.isNRSupported();
console.log("Result: "+ result);
```
```js
let slotId = 0;
let result = radio.isNRSupported(slotId);
console.log("Result: "+ result);
```
## cl.telephony.call.2 dial API Change
Changed the `dial` API to the `dialCall` API in the call module of the telephony subsystem since API version 9.
You need to adapt your application.
**Change Impact**
The `dial` API is deprecated and cannot be used anymore. Use the `dialCall` API instead. Otherwise, relevant functions will be affected.
**Key API/Component Changes**
- Involved APIs:
dial(phoneNumber: string, callback: AsyncCallback<boolean>): void;
dial(phoneNumber: string, options: DialOptions, callback: AsyncCallback<boolean>): void;
dial(phoneNumber: string, options?: DialOptions): Promise<boolean>;
- Before change:
```js
function dial(phoneNumber: string, callback: AsyncCallback<boolean>): void;
function dial(phoneNumber: string, options: DialOptions, callback: AsyncCallback<boolean>): void;
function dial(phoneNumber: string, options?: DialOptions): Promise<boolean>;
```
- After change:
```js
function dialCall(phoneNumber: string, callback: AsyncCallback<void>): void;
function dialCall(phoneNumber: string, options: DialCallOptions, callback: AsyncCallback<void>): void;
function dialCall(phoneNumber: string, options?: DialCallOptions): Promise<void>;
```
**Adaptation Guide**
The `dial` API is deprecated and cannot be used anymore. Use the `dialCall` API instead.
Use the new API. The sample code is as follows:
```js
call.dialCall("138xxxxxxxx", (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
});
```
```js
call.dialCall("138xxxxxxxx", {
accountId: 0,
videoState: 0,
dialScene: 0,
dialType: 0,
}, (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
});
```
```js
call.dialCall('138xxxxxxxx').then((data) => {
console.log(`dialCall success, promise: data->${JSON.stringify(data)}`);
}).catch((error) => {
console.log(`dialCall fail, promise: err->${JSON.stringify(error)}`);
});
```
# Globalization Subsystem Changelog
## cl.resourceManager.1 Addition of getStringSync and getStringByNameSync APIs
Added the **getStringSync** and **getStringByNameSync** APIs and error codes to obtain and format strings.
| Bundle Name | API |
| --------------- | ---------------------------------------------------- |
| ohos.resourceManager.d.ts | getStringSync(resId: number, ...args: Array<string \| number>): string; |
| ohos.resourceManager.d.ts | getStringSync(resource: Resource, ...args: Array<string \| number>): string; |
| ohos.resourceManager.d.ts | getStringByNameSync(resName: string, ...args: Array<string \| number>): string; |
**Change Impact**
In versions earlier than 4.0.6.2, only the values of string resources can be directly obtained. In 4.0.6.2 or later, the values of string resources can be formatted based on the input arguments for enhanced query.
The following error codes are added:
9001007: Failed to format the string resource obtained based on resId.
9001008: Failed to format the string resource obtained based on resName.
**Sample Code**
The following uses **getStringSync** as an example. Before the change, only example 1 is supported. After the change, both example 1 and example 2 are supported.
```
Example 1:
try {
this.context.resourceManager.getStringSync($r('app.string.test').id);
} catch (error) {
console.error(`getStringSync failed, error code: ${error.code}, message: ${error.message}.`)
}
Example 2:
try {
this.context.resourceManager.getStringSync($r('app.string.test').id, "format string", 787, 98.78);
} catch (error) {
console.error(`getStringSync failed, error code: ${error.code}, message: ${error.message}.`)
}
```
**Adaptation Guide**
For details, see the API reference.
[API Reference](../../../application-dev/reference/apis/js-apis-resource-manager.md)
[Error Codes](../../../application-dev/reference/errorcodes/errorcode-resource-manager.md)