Commit 519fc4cb authored by Hollokin, committed by Gitee

Merge branch 'master' of gitee.com:openharmony/docs into master

Signed-off-by: Hollokin <taoyuxin2@huawei.com>
......@@ -129,8 +129,10 @@ zh-cn/device-dev/subsystems/subsys-toolchain-hdc-guide.md @Austin23
zh-cn/device-dev/subsystems/subsys-toolchain-hiperf.md @Austin23
zh-cn/device-dev/subsystems/subsys-xts-guide.md @Austin23
zh-cn/application-dev/ability/ @RayShih
zh-cn/application-dev/ui/ @HelloCrease
zh-cn/application-dev/ability/ @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/IDL/ @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/device-usage-statistics/ @RayShih @shuaytao @wangzhen107 @inter515
zh-cn/application-dev/ui/ @HelloCrease @qieqiewl @tomatodevboy @niulihua
zh-cn/application-dev/notification/ @RayShih
zh-cn/application-dev/windowmanager/ @ge-yafang
zh-cn/application-dev/webgl/ @ge-yafang
......@@ -445,8 +447,19 @@ zh-cn/application-dev/reference/apis/js-apis-formextensioncontext.md @RayShih @l
zh-cn/application-dev/reference/apis/js-apis-formhost.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-formInfo.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-formprovider.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-inputmethod-extension-ability.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-inputmethod-extension-context.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-inputmethod-extension-ability.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-inputmethod-extension-context.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-inputmethod-subtype.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-inputmethod-framework.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-usb.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-datashare.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-colorspace-manager.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-display.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-distributed-data_object.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-distributedKVStore.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-pasteboard.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-preferences.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-window.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-application-quickFixManager.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-missionManager.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-particleAbility.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
......
# Legal Notices
**Copyright (c) 2020-2022 OpenAtom OpenHarmony. All rights reserved.**
## Copyright
All copyrights of the OpenHarmony documents are reserved by OpenAtom OpenHarmony.
The OpenHarmony documents are licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). For easier understanding, you can visit [Creative Commons](https://creativecommons.org/licenses/by/4.0/) to get a human-readable summary of the license. For the complete content, see [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode).
## Trademarks and Permissions
No content provided in the OpenHarmony documentation shall be deemed as a grant of the approval or right to use any trademark, name, or logo of the OpenAtom Foundation and OpenAtom OpenHarmony. No third parties shall use any of the aforementioned trademarks, names, or logos in any way without explicit prior written permission of the OpenAtom Foundation.
## Disclaimer
The information in the OpenHarmony documents is subject to change without notice.
The OpenHarmony documents are provided without any express or implied warranty. In any case, the OpenAtom Foundation or the copyright owner is not liable for any direct or indirect loss arising from the use of the OpenHarmony documents, regardless of the cause or legal theory, even if the OpenHarmony documents have stated that there is a possibility of such loss.
<!--no_check-->
\ No newline at end of file
......@@ -150,7 +150,7 @@ Currently, the OpenHarmony community supports 17 types of development boards, wh
| System Type| Board Model| Chip Model| <div style="width:200pt">Function Description and Use Case</div> | Application Scenario| Code Repository |
| -------- | -------- | -------- | -------- | -------- | -------- |
| Standard system| Runhe HH-SCDAYU200| RK3568 | <div style="width:200pt">Function description:<br>Bolstered by the Rockchip RK3568, the HH-SCDAYU200 development board integrates the dual-core GPU and efficient NPU. Its quad-core 64-bit Cortex-A55 processor uses the advanced 22 nm fabrication process and is clocked at up to 2.0 GHz. The board is packed with Bluetooth, Wi-Fi, audio, video, and camera features, with a wide range of expansion ports to accommodate various video input and outputs. It comes with dual GE auto-sensing RJ45 ports, so it can be used in multi-connectivity products, such as network video recorders (NVRs) and industrial gateways.<br>Use case:<br>[DAYU200 Use Case](device-dev/porting/porting-dayu200-on_standard-demo.md)</div> | Entertainment, easy travel, and smart home, such as kitchen hoods, ovens, and treadmills.| [device_soc_rockchip](https://gitee.com/openharmony/device_soc_rockchip)<br>[device_board_hihope](https://gitee.com/openharmony/device_board_hihope)<br>[vendor_hihope](https://gitee.com/openharmony/vendor_hihope) <br> |
| Standard system| Runhe HH-SCDAYU200| RK3568 | <div style="width:200pt">Function description:<br>Bolstered by the Rockchip RK3568, the HH-SCDAYU200 development board integrates the dual-core GPU and efficient NPU. Its quad-core 64-bit Cortex-A55 processor uses the advanced 22 nm fabrication process and is clocked at up to 2.0 GHz. The board is packed with Bluetooth, Wi-Fi, audio, video, and camera features, with a wide range of expansion ports to accommodate various video input and outputs. It comes with dual GE auto-sensing RJ45 ports, so it can be used in multi-connectivity products, such as network video recorders (NVRs) and industrial gateways.</div> | Entertainment, easy travel, and smart home, such as kitchen hoods, ovens, and treadmills.| [device_soc_rockchip](https://gitee.com/openharmony/device_soc_rockchip)<br>[device_board_hihope](https://gitee.com/openharmony/device_board_hihope)<br>[vendor_hihope](https://gitee.com/openharmony/vendor_hihope) <br> |
| Small system| Hispark_Taurus | Hi3516DV300 | <div style="width:200pt">Function Description:<br>Hi3516D V300 is the next-generation system on chip (SoC) for smart HD IP cameras. It integrates the next-generation image signal processor (ISP), H.265 video compression encoder, and high-performance NNIE engine, and delivers high performance in terms of low bit rate, high image quality, intelligent processing and analysis, and low power consumption.</div> | Smart device with screens, such as refrigerators with screens and head units.| [device_soc_hisilicon](https://gitee.com/openharmony/device_soc_hisilicon)<br>[device_board_hisilicon](https://gitee.com/openharmony/device_board_hisilicon)<br>[vendor_hisilicon](https://gitee.com/openharmony/vendor_hisilicon) <br> |
| Mini system| Multi-modal V200Z-R | BES2600 | <div style="width:200pt">Function description:<br>The multi-modal V200Z-R development board is a high-performance, multi-functional, and cost-effective AIoT SoC powered by the BES2600WM chip of Bestechnic. It integrates a quad-core ARM processor with a frequency of up to 1 GHz as well as dual-mode Wi-Fi and dual-mode Bluetooth. The board supports the 802.11 a/b/g/n/ and BT/BLE 5.2 standards. It is able to accommodate RAM of up to 42 MB and flash memory of up to 32 MB, and supports the MIPI display serial interface (DSI) and camera serial interface (CSI). It is applicable to various AIoT multi-modal VUI and GUI interaction scenarios.<br>Use case:<br>[Multi-modal V200Z-R Use Case](device-dev/porting/porting-bes2600w-on-minisystem-display-demo.md)</div> | Smart hardware, and smart devices with screens, such as speakers and watches.| [device_soc_bestechnic](https://gitee.com/openharmony/device_soc_bestechnic)<br>[device_board_fnlink](https://gitee.com/openharmony/device_board_fnlink)<br>[vendor_bestechnic](https://gitee.com/openharmony/vendor_bestechnic) <br> |
......
......@@ -7,7 +7,7 @@ To ensure successful communications between the client and server, interfaces re
![IDL-interface-description](./figures/IDL-interface-description.png)
IDL provides the following functions:
**IDL provides the following functions:**
- Declares interfaces provided by system services for external systems, and based on the interface declaration, generates C, C++, JS, or TS code for inter-process communication (IPC) or remote procedure call (RPC) proxies and stubs during compilation.
......@@ -17,7 +17,7 @@ IDL provides the following functions:
![IPC-RPC-communication-model](./figures/IPC-RPC-communication-model.png)
IDL has the following advantages:
**IDL has the following advantages:**
- Services are defined in the form of interfaces in IDL. Therefore, you do not need to focus on implementation details.
......@@ -433,7 +433,7 @@ export default {
console.log('ServiceAbility want:' + JSON.stringify(want));
console.log('ServiceAbility want name:' + want.bundleName)
} catch(err) {
console.log("ServiceAbility error:" + err)
console.log('ServiceAbility error:' + err)
}
console.info('ServiceAbility onConnect end');
return new IdlTestImp('connect');
......@@ -455,13 +455,13 @@ import featureAbility from '@ohos.ability.featureAbility';
function callbackTestIntTransaction(result: number, ret: number): void {
if (result == 0 && ret == 124) {
console.log("case 1 success ");
console.log('case 1 success');
}
}
function callbackTestStringTransaction(result: number): void {
if (result == 0) {
console.log("case 2 success ");
console.log('case 2 success');
}
}
......@@ -472,17 +472,17 @@ var onAbilityConnectDone = {
testProxy.testStringTransaction('hello', callbackTestStringTransaction);
},
onDisconnect:function (elementName) {
console.log("onDisconnectService onDisconnect");
console.log('onDisconnectService onDisconnect');
},
onFailed:function (code) {
console.log("onDisconnectService onFailed");
console.log('onDisconnectService onFailed');
}
};
function connectAbility(): void {
let want = {
"bundleName":"com.example.myapplicationidl",
"abilityName": "com.example.myapplicationidl.ServiceAbility"
bundleName: 'com.example.myapplicationidl',
abilityName: 'com.example.myapplicationidl.ServiceAbility'
};
let connectionId = -1;
connectionId = featureAbility.connectAbility(want, onAbilityConnectDone);
......@@ -495,7 +495,7 @@ function connectAbility: void {
You can send a class from one process to another through IPC interfaces. However, you must ensure that the peer can use the code of this class and this class supports the **marshalling** and **unmarshalling** methods. OpenHarmony uses **marshalling** and **unmarshalling** to serialize and deserialize objects into objects that can be identified by each process.
To create a class that supports the sequenceable type, perform the following operations:
**To create a class that supports the sequenceable type, perform the following operations:**
1. Implement the **marshalling** method, which obtains the current state of the object and serializes the object into a **Parcel** object.
2. Implement the **unmarshalling** method, which deserializes the object from a **Parcel** object.
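A minimal TS sketch of such a class is shown below; it follows the two steps above and assumes the **rpc.MessageParcel** read/write APIs used elsewhere in this guide (the class name and the **num**/**str** fields are illustrative, not part of the original example):
```ts
import rpc from '@ohos.rpc';

// Illustrative sequenceable class; the field names are assumptions for this sketch.
export default class MySequenceable {
    num: number = 0;
    str: string = '';

    constructor(num: number, str: string) {
        this.num = num;
        this.str = str;
    }

    // Step 1: serialize the current state of the object into a Parcel object.
    marshalling(messageParcel: rpc.MessageParcel): boolean {
        messageParcel.writeInt(this.num);
        messageParcel.writeString(this.str);
        return true;
    }

    // Step 2: deserialize the object from a Parcel object.
    unmarshalling(messageParcel: rpc.MessageParcel): boolean {
        this.num = messageParcel.readInt();
        this.str = messageParcel.readString();
        return true;
    }
}
```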
......@@ -595,7 +595,7 @@ export default class IdlTestServiceProxy implements IIdlTestService {
let _reply = new rpc.MessageParcel();
_data.writeInt(data);
this.proxy.sendRequest(IdlTestServiceProxy.COMMAND_TEST_INT_TRANSACTION, _data, _reply, _option).then(function(result) {
if (result.errCode === 0) {
if (result.errCode == 0) {
let _errCode = result.reply.readInt();
if (_errCode != 0) {
let _returnValue = undefined;
......@@ -605,7 +605,7 @@ export default class IdlTestServiceProxy implements IIdlTestService {
let _returnValue = result.reply.readInt();
callback(_errCode, _returnValue);
} else {
console.log("sendRequest failed, errCode: " + result.errCode);
console.log('sendRequest failed, errCode: ' + result.errCode);
}
})
}
......@@ -617,11 +617,11 @@ export default class IdlTestServiceProxy implements IIdlTestService {
let _reply = new rpc.MessageParcel();
_data.writeString(data);
this.proxy.sendRequest(IdlTestServiceProxy.COMMAND_TEST_STRING_TRANSACTION, _data, _reply, _option).then(function(result) {
if (result.errCode === 0) {
if (result.errCode == 0) {
let _errCode = result.reply.readInt();
callback(_errCode);
} else {
console.log("sendRequest failed, errCode: " + result.errCode);
console.log('sendRequest failed, errCode: ' + result.errCode);
}
})
}
......@@ -644,12 +644,12 @@ import nativeMgr from 'nativeManager';
function testIntTransactionCallback(errCode: number, returnValue: number)
{
console.log("errCode: " + errCode + " returnValue: " + returnValue);
console.log('errCode: ' + errCode + ' returnValue: ' + returnValue);
}
function testStringTransactionCallback(errCode: number)
{
console.log("errCode: " + errCode);
console.log('errCode: ' + errCode);
}
function jsProxyTriggerCppStub()
......@@ -660,6 +660,6 @@ function jsProxyTriggerCppStub()
tsProxy.testIntTransaction(10, testIntTransactionCallback);
// Call testStringTransaction.
tsProxy.testStringTransaction("test", testIntTransactionCallback);
tsProxy.testStringTransaction('test', testStringTransactionCallback);
}
```
......@@ -8,14 +8,13 @@
- Quick Start
- Getting Started
- [Preparations](quick-start/start-overview.md)
- [Getting Started with eTS in Stage Model](quick-start/start-with-ets-stage.md)
- [Getting Started with eTS in FA Model](quick-start/start-with-ets-fa.md)
- [Getting Started with ArkTS in Stage Model](quick-start/start-with-ets-stage.md)
- [Getting Started with ArkTS in FA Model](quick-start/start-with-ets-fa.md)
- [Getting Started with JavaScript in FA Model](quick-start/start-with-js-fa.md)
- Development Fundamentals
- [Application Package Structure Configuration File (FA Model)](quick-start/package-structure.md)
- [Application Package Structure Configuration File (Stage Model)](quick-start/stage-structure.md)
- [SysCap](quick-start/syscap.md)
- [HarmonyAppProvision Configuration File](quick-start/app-provision-structure.md)
- Development
- [Ability Development](ability/Readme-EN.md)
......
# Ability Development
- [Ability Framework Overview](ability-brief.md)
- [Context Usage](context-userguide.md)
- FA Model
......@@ -19,5 +20,3 @@
- [Ability Assistant Usage](ability-assistant-guidelines.md)
- [ContinuationManager Development](continuationmanager.md)
- [Test Framework Usage](ability-delegator.md)
......@@ -14,20 +14,20 @@ As the entry of the ability continuation capability, **continuationManager** is
## Available APIs
| API | Description|
| ---------------------------------------------------------------------------------------------- | ----------- |
| register(callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| register(options: ContinuationExtraParams, callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API uses an asynchronous callback to return the result.|
| register(options?: ContinuationExtraParams): Promise\<number> | Registers the continuation management service and obtains a token. This API uses a promise to return the result.|
| on(type: "deviceConnect", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device connection events. This API uses an asynchronous callback to return the result.|
| on(type: "deviceDisconnect", token: number, callback: Callback\<Array\<string>>): void | Subscribes to device disconnection events. This API uses an asynchronous callback to return the result.|
| off(type: "deviceConnect", token: number): void | Unsubscribes from device connection events.|
| off(type: "deviceDisconnect", token: number): void | Unsubscribes from device disconnection events.|
| startDeviceManager(token: number, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| startDeviceManager(token: number, options: ContinuationExtraParams, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API uses an asynchronous callback to return the result.|
| startDeviceManager(token: number, options?: ContinuationExtraParams): Promise\<void> | Starts the device selection module to show the list of available devices. This API uses a promise to return the result.|
| updateConnectStatus(token: number, deviceId: string, status: DeviceConnectState, callback: AsyncCallback\<void>): void | Instructs the device selection module to update the device connection state. This API uses an asynchronous callback to return the result.|
| updateConnectStatus(token: number, deviceId: string, status: DeviceConnectState): Promise\<void> | Instructs the device selection module to update the device connection state. This API uses a promise to return the result.|
| unregister(token: number, callback: AsyncCallback\<void>): void | Deregisters the continuation management service. This API uses an asynchronous callback to return the result.|
| unregister(token: number): Promise\<void> | Deregisters the continuation management service. This API uses a promise to return the result.|
| registerContinuation(callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| registerContinuation(options: ContinuationExtraParams, callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API uses an asynchronous callback to return the result.|
| registerContinuation(options?: ContinuationExtraParams): Promise\<number> | Registers the continuation management service and obtains a token. This API uses a promise to return the result.|
| on(type: "deviceSelected", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device connection events. This API uses an asynchronous callback to return the result.|
| on(type: "deviceUnselected", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device disconnection events. This API uses an asynchronous callback to return the result.|
| off(type: "deviceSelected", token: number): void | Unsubscribes from device connection events.|
| off(type: "deviceUnselected", token: number): void | Unsubscribes from device disconnection events.|
| startContinuationDeviceManager(token: number, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| startContinuationDeviceManager(token: number, options: ContinuationExtraParams, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API uses an asynchronous callback to return the result.|
| startContinuationDeviceManager(token: number, options?: ContinuationExtraParams): Promise\<void> | Starts the device selection module to show the list of available devices. This API uses a promise to return the result.|
| updateContinuationState(token: number, deviceId: string, status: DeviceConnectState, callback: AsyncCallback\<void>): void | Instructs the device selection module to update the device connection state. This API uses an asynchronous callback to return the result.|
| updateContinuationState(token: number, deviceId: string, status: DeviceConnectState): Promise\<void> | Instructs the device selection module to update the device connection state. This API uses a promise to return the result.|
| unregisterContinuation(token: number, callback: AsyncCallback\<void>): void | Deregisters the continuation management service. This API uses an asynchronous callback to return the result.|
| unregisterContinuation(token: number): Promise\<void> | Deregisters the continuation management service. This API uses a promise to return the result.|
## How to Develop
1. Import the **continuationManager** module.
......@@ -36,7 +36,7 @@ As the entry of the ability continuation capability, **continuationManager** is
import continuationManager from '@ohos.continuation.continuationManager';
```
2. Apply for permissions required for cross-device continuation or collaboration operations.
2. Apply for the **DISTRIBUTED_DATASYNC** permission.
The permission application operation varies according to the ability model in use. In the FA model, add the required permission in the `config.json` file, as follows:
......@@ -57,6 +57,7 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
import abilityAccessCtrl from "@ohos.abilityAccessCtrl";
import bundle from '@ohos.bundle';
import featureAbility from '@ohos.ability.featureAbility';
async function requestPermission() {
let permissions: Array<string> = [
......@@ -124,7 +125,8 @@ As the entry of the ability continuation capability, **continuationManager** is
// If the permission is not granted, call requestPermissionsFromUser to apply for the permission.
if (needGrantPermission) {
try {
await globalThis.abilityContext.requestPermissionsFromUser(permissions);
// globalThis.context is Ability.context, which must be assigned a value in the MainAbility.ts file in advance.
await globalThis.context.requestPermissionsFromUser(permissions);
} catch (err) {
console.error('app permission request permissions error' + JSON.stringify(err));
}
......@@ -140,13 +142,16 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
let token: number = -1; // Used to save the token returned after the registration. The token will be used when listening for device connection/disconnection events, starting the device selection module, and updating the device connection state.
continuationManager.register().then((data) => {
console.info('register finished, ' + JSON.stringify(data));
try {
continuationManager.registerContinuation().then((data) => {
console.info('registerContinuation finished, ' + JSON.stringify(data));
token = data; // Obtain a token and assign a value to the token variable.
}).catch((err) => {
console.error('register failed, cause: ' + JSON.stringify(err));
console.error('registerContinuation failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('registerContinuation failed, cause: ' + JSON.stringify(err));
}
```
4. Listen for the device connection/disconnection state.
......@@ -156,9 +161,10 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
let remoteDeviceId: string = ""; // Used to save the information about the remote device selected by the user, which will be used for cross-device continuation or collaboration.
try {
// The token parameter is the token obtained during the registration.
continuationManager.on("deviceConnect", token, (continuationResults) => {
console.info('registerDeviceConnectCallback len: ' + continuationResults.length);
continuationManager.on("deviceSelected", token, (continuationResults) => {
console.info('registerDeviceSelectedCallback len: ' + continuationResults.length);
if (continuationResults.length <= 0) {
console.info('no selected device');
return;
......@@ -171,13 +177,15 @@ As the entry of the ability continuation capability, **continuationManager** is
bundleName: 'ohos.samples.continuationmanager',
abilityName: 'MainAbility'
};
// To initiate multi-device collaboration, you must obtain the ohos.permission.DISTRIBUTED_DATASYNC permission.
globalThis.abilityContext.startAbility(want).then((data) => {
console.info('StartRemoteAbility finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('StartRemoteAbility failed, cause: ' + JSON.stringify(err));
});
});
} catch (err) {
console.error('on failed, cause: ' + JSON.stringify(err));
}
```
The preceding multi-device collaboration operation is performed across devices in the stage model. For details about this operation in the FA model, see [Page Ability Development](https://gitee.com/openharmony/docs/blob/master/en/application-dev/ability/fa-pageability.md).
......@@ -189,35 +197,43 @@ As the entry of the ability continuation capability, **continuationManager** is
let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.CONNECTED;
// The token parameter is the token obtained during the registration, and the remoteDeviceId parameter is the remoteDeviceId obtained.
continuationManager.updateConnectStatus(token, remoteDeviceId, deviceConnectStatus).then((data) => {
console.info('updateConnectStatus finished, ' + JSON.stringify(data));
try {
continuationManager.updateContinuationState(token, remoteDeviceId, deviceConnectStatus).then((data) => {
console.info('updateContinuationState finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('updateConnectStatus failed, cause: ' + JSON.stringify(err));
console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
}
```
Listen for the device disconnection state so that the user can stop cross-device continuation or collaboration in time. The sample code is as follows:
```ts
try {
// The token parameter is the token obtained during the registration.
continuationManager.on("deviceDisconnect", token, (deviceIds) => {
console.info('onDeviceDisconnect len: ' + deviceIds.length);
if (deviceIds.length <= 0) {
continuationManager.on("deviceUnselected", token, (continuationResults) => {
console.info('onDeviceUnselected len: ' + continuationResults.length);
if (continuationResults.length <= 0) {
console.info('no unselected device');
return;
}
// Update the device connection state.
let unselectedDeviceId: string = deviceIds[0]; // Assign the deviceId of the first deselected remote device to the unselectedDeviceId variable.
let unselectedDeviceId: string = continuationResults[0].id; // Assign the deviceId of the first deselected remote device to the unselectedDeviceId variable.
let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.DISCONNECTING; // Device disconnected.
// The token parameter is the token obtained during the registration, and the unselectedDeviceId parameter is the unselectedDeviceId obtained.
continuationManager.updateConnectStatus(token, unselectedDeviceId, deviceConnectStatus).then((data) => {
console.info('updateConnectStatus finished, ' + JSON.stringify(data));
continuationManager.updateContinuationState(token, unselectedDeviceId, deviceConnectStatus).then((data) => {
console.info('updateContinuationState finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('updateConnectStatus failed, cause: ' + JSON.stringify(err));
console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
});
});
} catch (err) {
console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
}
```
5. Start the device selection module to show the list of available devices on the network.
......@@ -231,12 +247,16 @@ As the entry of the ability continuation capability, **continuationManager** is
continuationMode: continuationManager.ContinuationMode.COLLABORATION_SINGLE // Single-choice mode of the device selection module.
};
try {
// The token parameter is the token obtained during the registration.
continuationManager.startDeviceManager(token, continuationExtraParams).then((data) => {
console.info('startDeviceManager finished, ' + JSON.stringify(data));
continuationManager.startContinuationDeviceManager(token, continuationExtraParams).then((data) => {
console.info('startContinuationDeviceManager finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('startDeviceManager failed, cause: ' + JSON.stringify(err));
console.error('startContinuationDeviceManager failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('startContinuationDeviceManager failed, cause: ' + JSON.stringify(err));
}
```
6. If you do not need to perform cross-device migration or collaboration operations, you can deregister the continuation management service by passing the token obtained during the registration.
......@@ -244,10 +264,14 @@ As the entry of the ability continuation capability, **continuationManager** is
The sample code is as follows:
```ts
try {
// The token parameter is the token obtained during the registration.
continuationManager.unregister(token).then((data) => {
console.info('unregister finished, ' + JSON.stringify(data));
continuationManager.unregisterContinuation(token).then((data) => {
console.info('unregisterContinuation finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('unregister failed, cause: ' + JSON.stringify(err));
console.error('unregisterContinuation failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('unregisterContinuation failed, cause: ' + JSON.stringify(err));
}
```
# Page Ability Development
## Overview
### Concepts
The Page ability implements the ArkUI and provides the capability of interacting with developers. When you create an ability in DevEco Studio, DevEco Studio automatically creates template code. The capabilities related to the Page ability are implemented through the **featureAbility**, and the lifecycle callbacks are implemented through the callbacks in **app.js** or **app.ets**.
The Page ability implements the ArkUI and provides the capability of interacting with users. When you create an ability in DevEco Studio, DevEco Studio automatically creates template code.
The capabilities related to the Page ability are implemented through the **featureAbility**, and the lifecycle callbacks are implemented through the callbacks in **app.js** or **app.ets**.
### Page Ability Lifecycle
**Ability lifecycle**
Introduction to the Page ability lifecycle:
The Page ability lifecycle defines all states of a Page ability, such as **INACTIVE**, **ACTIVE**, and **BACKGROUND**.
......@@ -27,27 +31,30 @@ Description of ability lifecycle states:
- **BACKGROUND**: The Page ability runs in the background. After being re-activated, the Page ability enters the **ACTIVE** state. After being destroyed, the Page ability enters the **INITIAL** state.
**The following figure shows the relationship between lifecycle callbacks and lifecycle states of the Page ability.**
The following figure shows the relationship between lifecycle callbacks and lifecycle states of the Page ability.
![fa-pageAbility-lifecycle](figures/fa-pageAbility-lifecycle.png)
You can override the lifecycle callbacks provided by the Page ability in the **app.js** or **app.ets** file. Currently, the **app.js** file provides only the **onCreate** and **onDestroy** callbacks, and the **app.ets** file provides the full lifecycle callbacks.
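As a minimal illustration, the **app.js** callbacks can be overridden as follows (a sketch assuming the FA-model template structure; the log strings are illustrative):
```javascript
// app.js: only onCreate and onDestroy are available in this file.
export default {
    onCreate() {
        console.info('Application onCreate');
    },
    onDestroy() {
        console.info('Application onDestroy');
    }
}
```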
### Launch Type
The ability supports two launch types: singleton and multi-instance.
You can specify the launch type by setting **launchType** in the **config.json** file.
**Table 1** Introduction to startup mode
**Table 1** Startup modes
| Launch Type | Meaning | Description |
| ----------- | ------- |---------------- |
| standard | Multi-instance | A new instance is started each time an ability starts.|
| singleton | Singleton | Only one instance exists in the system. If an instance already exists when an ability is started, that instance is reused.|
| singleton | Singleton | The ability has only one instance in the system. If an instance already exists when an ability is started, that instance is reused.|
By default, **singleton** is used.
## Development Guidelines
### Available APIs
**Table 2** APIs provided by featureAbility
......@@ -73,8 +80,7 @@ By default, **singleton** is used.
```javascript
import featureAbility from '@ohos.ability.featureAbility'
featureAbility.startAbility({
want:
{
want: {
action: "",
entities: [""],
type: "",
......@@ -83,13 +89,12 @@ By default, **singleton** is used.
/* In the FA model, abilityName consists of package and ability name. */
abilityName: "com.example.entry.secondAbility",
uri: ""
},
},
);
}
});
```
### Starting a Remote Page Ability
>Note
>NOTE
>
>This feature applies only to system applications, since the **getTrustedDeviceListSync** API of the **DeviceManager** class is open only to system applications.
......
......@@ -10,7 +10,7 @@ IPC/RPC enables a proxy and a stub that run on different processes to communicat
| Class/Interface | Function | Description |
| --------------- | -------- | ----------- |
| IRemoteBroker | sptr<IRemoteObject> AsObject() | Obtains the holder of a remote proxy object. This method must be implemented by the derived classes of **IRemoteBroker**. If you call this method on the stub, the **RemoteObject** is returned; if you call this method on the proxy, the proxy object is returned. |
| [IRemoteBroker](../reference/apis/js-apis-rpc.md#iremotebroker) | sptr<IRemoteObject> AsObject() | Obtains the holder of a remote proxy object. This method must be implemented by the derived classes of **IRemoteBroker**. If you call this method on the stub, the **RemoteObject** is returned; if you call this method on the proxy, the proxy object is returned. |
| IRemoteStub | virtual int OnRemoteRequest(uint32_t code, MessageParcel &data, MessageParcel &reply, MessageOption &option) | Called to process a request from the proxy and return the result. Derived classes need to override this method. |
| IRemoteProxy | | Service proxy classes are derived from the **IRemoteProxy** class. |
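For comparison, the JS/TS side exposes an analogous stub through **rpc.RemoteObject** in the **@ohos.rpc** module. The sketch below is an assumption-based illustration (the class name, request code, and processing logic are not from the table above) of how **onRemoteRequest** handles a request from the proxy:
```ts
import rpc from '@ohos.rpc';

// Illustrative TS stub; the descriptor, request code, and reply logic are assumptions.
class TestStub extends rpc.RemoteObject {
    constructor(descriptor: string) {
        super(descriptor);
    }

    // Counterpart of IRemoteStub::OnRemoteRequest: process a request from the proxy
    // and write the result into the reply parcel.
    onRemoteRequest(code: number, data: rpc.MessageParcel, reply: rpc.MessageParcel, option: rpc.MessageOption): boolean {
        if (code === 1) {
            let value = data.readInt();
            reply.writeInt(value + 1); // Illustrative processing of the request.
            return true;
        }
        return false;
    }
}
```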
......
......@@ -66,25 +66,7 @@ To learn more about the APIs for obtaining device location information, see [Geo
If your application needs to access the device location information when running in the background, it must be configured to be able to run in the background and be granted the **ohos.permission.LOCATION_IN_BACKGROUND** permission. In this way, the system continues to report device location information after your application moves to the background.
To allow your application to access device location information, declare the required permissions in the **module.json** file of your application. The sample code is as follows:
```
{
"module": {
"reqPermissions": [
"name": "ohos.permission.LOCATION",
"reason": "$string:reason_description",
"usedScene": {
"ability": ["com.myapplication.LocationAbility"],
"when": "inuse"
}
]
}
}
```
For details about these fields, see [Application Package Structure Configuration File](../quick-start/stage-structure.md).
You can declare the required permission in your application's configuration file. For details, see [Access Control (Permission) Development](../security/accesstoken-guidelines.md).
2. Import the **geolocation** module by which you can implement all APIs related to the basic location capabilities.
......
......@@ -15,6 +15,7 @@
- [Video Playback Development](video-playback.md)
- [Video Recording Development](video-recorder.md)
- Image
- [Image Development](image.md)
......
# Audio Capture Development
## When to Use
## Introduction
You can use the APIs provided by **AudioCapturer** to record raw audio files.
You can use the APIs provided by **AudioCapturer** to record raw audio files, thereby implementing audio data collection.
### State Check
**Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioCapturer** instance. This is because some operations can be performed only when the audio capturer is in a given state. If the application performs an operation when the audio capturer is not in the given state, the system may throw an exception or generate other undefined behavior.
During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioCapturer** instance. This is because some operations can be performed only when the audio capturer is in a given state. If the application performs an operation when the audio capturer is not in the given state, the system may throw an exception or generate other undefined behavior.
## Working Principles
For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8).
The following figure shows the audio capturer state transitions.
**Figure 1** Audio capturer state transitions
![audio-capturer-state](figures/audio-capturer-state.png)
- **PREPARED**: The audio capturer enters this state by calling **create()**.
- **RUNNING**: The audio capturer enters this state by calling **start()** when it is in the **PREPARED** state or by calling **start()** when it is in the **STOPPED** state.
- **STOPPED**: The audio capturer in the **RUNNING** state can call **stop()** to stop capturing audio data.
- **RELEASED**: The audio capturer in the **PREPARED** or **STOPPED** state can use **release()** to release all occupied hardware and software resources. It will not transit to any other state after it enters the **RELEASED** state.
**Figure 1** Audio capturer state
## Constraints
![](figures/audio-capturer-state.png)
Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details about permission configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop
For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8).
1. Use **createAudioCapturer()** to create an **AudioCapturer** instance.
Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording status, and register a callback for notification.
Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording state, and register a callback for notification.
```js
var audioStreamInfo = {
import audio from '@ohos.multimedia.audio';
let audioStreamInfo = {
samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
channels: audio.AudioChannel.CHANNEL_1,
sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
var audioCapturerInfo = {
let audioCapturerInfo = {
source: audio.SourceType.SOURCE_TYPE_MIC,
capturerFlags: 1
capturerFlags: 0 // 0 is the extended flag bit of the audio capturer. The default value is 0.
}
var audioCapturerOptions = {
let audioCapturerOptions = {
streamInfo: audioStreamInfo,
capturerInfo: audioCapturerInfo
}
let audioCapturer = await audio.createAudioCapturer(audioCapturerOptions);
var state = audioRenderer.state;
```
2. (Optional) Use **on('stateChange')** to subscribe to audio renderer state changes.
If an application needs to perform some operations when the audio renderer state is updated, the application can subscribe to the state changes. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).
```js
audioCapturer.on('stateChange',(state) => {
console.info('AudioCapturerLog: Changed State to : ' + state)
switch (state) {
case audio.AudioState.STATE_PREPARED:
console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------');
console.info('Audio State is : Prepared');
break;
case audio.AudioState.STATE_RUNNING:
console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------');
console.info('Audio State is : Running');
break;
case audio.AudioState.STATE_STOPPED:
console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------');
console.info('Audio State is : stopped');
break;
case audio.AudioState.STATE_RELEASED:
console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------');
console.info('Audio State is : released');
break;
default:
console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
console.info('Audio State is : invalid');
break;
}
});
console.log('AudioRecLog: Create audio capturer success.');
```
3. Use **start()** to start audio recording.
2. Use **start()** to start audio recording.
The capturer state will be **STATE_RUNNING** once the audio capturer is started. The application can then begin reading buffers.
```js
import audio from '@ohos.multimedia.audio';
async function startCapturer() {
let state = audioCapturer.state;
// The audio capturer can be started only when it is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
state != audio.AudioState.STATE_STOPPED) {
console.info('Capturer is not in a correct state to start');
return;
}
await audioCapturer.start();
if (audioCapturer.state == audio.AudioState.STATE_RUNNING) {
let state = audioCapturer.state;
if (state == audio.AudioState.STATE_RUNNING) {
console.info('AudioRecLog: Capturer started');
} else {
console.info('AudioRecLog: Capturer start failed');
console.error('AudioRecLog: Capturer start failed');
}
}
```
4. Use **getBufferSize()** to obtain the minimum buffer size to read.
```js
var bufferSize = await audioCapturer.getBufferSize();
console.info('AudioRecLog: buffer size: ' + bufferSize);
```
5. Read the captured audio data and convert it to a byte stream. Call **read()** repeatedly to read the data until the application wants to stop the recording.
3. Read the captured audio data and convert it to a byte stream. Call **read()** repeatedly to read the data until the application stops the recording.
The following example shows how to write recorded data into a file.
```js
import fileio from '@ohos.fileio';
const path = '/data/data/.pulse_dir/capture_js.wav';
let state = audioCapturer.state;
// The read operation can be performed only when the state is STATE_RUNNING.
if (state != audio.AudioState.STATE_RUNNING) {
console.info('Capturer is not in a correct state to read');
return;
}
const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the collected audio file.
let fd = fileio.openSync(path, 0o102, 0o777);
if (fd !== null) {
console.info('AudioRecLog: file fd created');
......@@ -115,38 +110,140 @@ If an application needs to perform some operations when the audio renderer state
console.info('AudioRecLog: file fd opened in append mode');
}
var numBuffersToCapture = 150;
let numBuffersToCapture = 150; // Write data 150 times.
while (numBuffersToCapture) {
var buffer = await audioCapturer.read(bufferSize, true);
let buffer = await audioCapturer.read(bufferSize, true);
if (typeof(buffer) == 'undefined') {
console.info('read buffer failed');
console.info('AudioRecLog: read buffer failed');
} else {
var number = fileio.writeSync(fd, buffer);
console.info('AudioRecLog: data written: ' + number);
let number = fileio.writeSync(fd, buffer);
console.info(`AudioRecLog: data written: ${number}`);
}
numBuffersToCapture--;
}
```
6. Once the recording is complete, call **stop()** to stop the recording.
4. Once the recording is complete, call **stop()** to stop the recording.
```js
async function stopCapturer() {
let state = audioCapturer.state;
// The audio capturer can be stopped only when it is in STATE_RUNNING or STATE_PAUSED state.
if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
console.info('AudioRecLog: Capturer is not running or paused');
return;
}
```
await audioCapturer.stop();
if (audioCapturer.state == audio.AudioState.STATE_STOPPED) {
state = audioCapturer.state;
if (state == audio.AudioState.STATE_STOPPED) {
console.info('AudioRecLog: Capturer stopped');
} else {
console.info('AudioRecLog: Capturer stop failed');
console.error('AudioRecLog: Capturer stop failed');
}
}
```
7. After the task is complete, call **release()** to release related resources.
5. After the task is complete, call **release()** to release related resources.
```js
async function releaseCapturer() {
let state = audioCapturer.state;
// The audio capturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
console.info('AudioRecLog: Capturer already released');
return;
}
await audioCapturer.release();
if (audioCapturer.state == audio.AudioState.STATE_RELEASED) {
state = audioCapturer.state;
if (state == audio.AudioState.STATE_RELEASED) {
console.info('AudioRecLog: Capturer released');
} else {
console.info('AudioRecLog: Capturer release failed');
}
}
```
6. (Optional) Obtain the audio capturer information.
You can use the following code to obtain the audio capturer information:
```js
// Obtain the audio capturer state.
let state = audioCapturer.state;
// Obtain the audio capturer information.
let audioCapturerInfo : audio.AudioCapturerInfo = await audioCapturer.getCapturerInfo();
// Obtain the audio stream information.
let audioStreamInfo : audio.AudioStreamInfo = await audioCapturer.getStreamInfo();
// Obtain the audio stream ID.
let audioStreamId : number = await audioCapturer.getAudioStreamId();
// Obtain the Unix timestamp, in nanoseconds.
let audioTime : number = await audioCapturer.getAudioTime();
// Obtain a proper minimum buffer size.
let bufferSize : number = await audioCapturer.getBufferSize();
```
7. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.
After the mark reached event is subscribed to, when the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('markReach', (reachNumber) => {
console.info('Mark reach event Received');
console.info(`The Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for.
```
8. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event.
After the period reached event is subscribed to, each time the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('periodReach', (reachNumber) => {
console.info('Period reach event Received');
console.info(`In this period, the Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for.
```
9. If your application needs to perform some operations when the audio capturer state is updated, it can subscribe to the state change event. When the audio capturer state is updated, the application receives a callback containing the event type.
```js
audioCapturer.on('stateChange', (state) => {
console.info(`AudioCapturerLog: Changed State to : ${state}`)
switch (state) {
case audio.AudioState.STATE_PREPARED:
console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------');
console.info('Audio State is : Prepared');
break;
case audio.AudioState.STATE_RUNNING:
console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------');
console.info('Audio State is : Running');
break;
case audio.AudioState.STATE_STOPPED:
console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------');
console.info('Audio State is : stopped');
break;
case audio.AudioState.STATE_RELEASED:
console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------');
console.info('Audio State is : released');
break;
default:
console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
console.info('Audio State is : invalid');
break;
}
});
```
# Audio Interruption Mode Development
## When to Use
The audio interruption mode is used to control the playback of multiple audio streams.<br>
Audio applications can set the audio interruption mode to independent or shared under **AudioRenderer**.<br>
In shared mode, multiple audio streams share one session ID. In independent mode, each audio stream has an independent session ID.
## Introduction
The audio interruption mode is used to control the playback of multiple audio streams.
Audio applications can set the audio interruption mode to independent or shared under **AudioRenderer**.
### Asynchronous Operations
In shared mode, multiple audio streams share one session ID. In independent mode, each audio stream has an independent session ID.
To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions.
**Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions.
## How to Develop
For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.
1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.<br>
Set parameters of the **AudioRenderer** instance in **audioRendererOptions**.<br>
This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.<br>
Set parameters of the **AudioRenderer** instance in **audioRendererOptions**.
```js
This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.
```js
import audio from '@ohos.multimedia.audio';
var audioStreamInfo = {
......@@ -39,7 +40,7 @@ For details about the APIs, see [AudioRenderer in Audio Management](../reference
rendererInfo: audioRendererInfo
}
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
```
2. Set the audio interruption mode.
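The detailed example for this step is not shown in this excerpt. A minimal sketch, assuming the **setInterruptMode** API of **AudioRenderer** and the **audio.InterruptMode** enumeration:
```js
let mode = audio.InterruptMode.INDEPENDENT_MODE; // Use SHARE_MODE for the shared mode.
audioRenderer.setInterruptMode(mode).then(() => {
    console.info('setInterruptMode Success!');
}).catch((err) => {
    console.error(`setInterruptMode Fail: ${err}`);
});
```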
......
# Audio Playback Development
## When to Use
## Introduction
You can use audio playback APIs to convert audio data into audible analog signals, play the signals using output devices, and manage playback tasks.
You can use audio playback APIs to convert audio data into audible analog signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, stop playback, release resources, set the volume, seek to a playback position, and obtain track information.
**Figure 1** Playback status
## Working Principles
The following figures show the audio playback status changes and the interaction with external modules for audio playback.
**Figure 1** Audio playback state transition
![en-us_image_audio_state_machine](figures/en-us_image_audio_state_machine.png)
**Note**: If the status is **Idle**, setting the **src** attribute does not change the status. In addition, after the **src** attribute is set successfully, you must call **reset()** before setting it to another value.
**NOTE**: If the status is **Idle**, setting the **src** attribute does not change the status. In addition, after the **src** attribute is set successfully, you must call **reset()** before setting it to another value.
**Figure 2** Layer 0 diagram of audio playback
**Figure 2** Interaction with external modules for audio playback
![en-us_image_audio_player](figures/en-us_image_audio_player.png)
**NOTE**: When a third-party application calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework and outputs the audio data decoded by the software to the audio HDI of the hardware interface layer to implement audio playback.
## How to Develop
For details about the APIs, see [AudioPlayer in the Media API](../reference/apis/js-apis-media.md#audioplayer).
> **NOTE**
>
> The method for obtaining the path in the FA model is different from that in the stage model. **pathDir** used in the sample code below is an example. You need to obtain the path based on project requirements. For details about how to obtain the path, see [Application Sandbox Path Guidelines](../reference/apis/js-apis-fileio.md#guidelines).
### Full-Process Scenario
The full audio playback process includes creating an instance, setting the URI, playing audio, seeking to the playback position, setting the volume, pausing playback, obtaining track information, stopping playback, resetting the player, and releasing resources.
......@@ -99,8 +109,9 @@ async function audioPlayerDemo() {
setCallBack(audioPlayer); // Set the event callbacks.
// 2. Set the URI of the audio file.
let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile" command.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile/01.mp3';
let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
......@@ -118,6 +129,7 @@ async function audioPlayerDemo() {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
export class AudioDemo {
// Set the player callbacks.
setCallBack(audioPlayer) {
......@@ -139,8 +151,9 @@ export class AudioDemo {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile" command.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile/01.mp3';
let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
......@@ -159,6 +172,7 @@ export class AudioDemo {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
export class AudioDemo {
// Set the player callbacks.
private isNextMusic = false;
......@@ -185,8 +199,9 @@ export class AudioDemo {
async nextMusic(audioPlayer) {
this.isNextMusic = true;
let nextFdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile" command.
let nextpath = '/data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile/02.mp3';
let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let nextpath = pathDir + '/02.mp3'
await fileIO.open(nextpath).then((fdNumber) => {
nextFdPath = nextFdPath + '' + fdNumber;
console.info('open fd success fd is' + nextFdPath);
......@@ -202,8 +217,9 @@ export class AudioDemo {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile" command.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile/01.mp3';
let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
......@@ -222,6 +238,7 @@ export class AudioDemo {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
export class AudioDemo {
// Set the player callbacks.
setCallBack(audioPlayer) {
......@@ -239,8 +256,9 @@ export class AudioDemo {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile" command.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile/01.mp3';
let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
......
# Audio Recording Development
## When to Use
## Introduction
During audio recording, audio signals are captured, encoded, and saved to files. You can specify parameters such as the sampling rate, number of audio channels, encoding format, encapsulation format, and file path for audio recording.
During audio recording, audio signals are captured, encoded, and saved to files. You can specify parameters such as the sampling rate, number of audio channels, encoding format, encapsulation format, and output file path for audio recording.
## Working Principles
The following figures show the audio recording state transition and the interaction with external modules for audio recording.
**Figure 1** Audio recording state transition
......@@ -10,10 +14,16 @@ During audio recording, audio signals are captured, encoded, and saved to files.
**Figure 2** Layer 0 diagram of audio recording
**Figure 2** Interaction with external modules for audio recording
![en-us_image_audio_recorder_zero](figures/en-us_image_audio_recorder_zero.png)
**NOTE**: When a third-party recording application or recorder calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework to obtain the audio data captured through the audio HDI. The framework layer then encodes the audio data through software and saves the encoded and encapsulated audio data to a file to implement audio recording.
## Constraints
Before developing audio recording, configure the **ohos.permission.MICROPHONE** permission for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop
For details about the APIs, see [AudioRecorder in the Media API](../reference/apis/js-apis-media.md#audiorecorder).
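As a quick orientation, the following is a minimal sketch of creating an **AudioRecorder** and preparing it with a recording configuration. The parameter values and the `fd://` URI are illustrative assumptions only; obtain a real file descriptor (for example, with **fileIO.open**) and adjust the parameters based on project requirements.

```js
import media from '@ohos.multimedia.media'

let audioRecorder = media.createAudioRecorder();     // Create an AudioRecorder instance.
let audioRecorderConfig = {                          // Illustrative parameters; adjust as needed.
  audioEncodeBitRate: 22050,
  audioSampleRate: 22050,
  numberOfChannels: 2,
  format: media.AudioOutputFormat.AAC_ADTS,          // Encapsulation format.
  audioEncoderMime: media.CodecMimeType.AUDIO_AAC,   // Encoding format.
  uri: 'fd://xx',                                    // Placeholder; replace with a real file descriptor.
}
audioRecorder.on('prepare', () => {                  // Reported when prepare() succeeds.
  console.info('audio recorder prepare success');
  audioRecorder.start();                             // Start recording.
});
audioRecorder.prepare(audioRecorderConfig);
```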
......
# Audio Stream Management Development
## When to Use
## Introduction
You can use **AudioStreamManager** to manage audio streams.
### Development Process
## Working Principles
During application development, use **getStreamManager()** to create an **AudioStreamManager** instance. Then, you can call **on('audioRendererChange')** or **on('audioCapturerChange')** to listen for status, client, and audio attribute changes of the audio playback or recording application. To cancel the listening for these changes, call **off('audioRendererChange')** or **off('audioCapturerChange')**. You can call **getCurrentAudioRendererInfoArray()** to obtain information about the audio playback application, such as the unique audio stream ID, UID of the audio playback client, and audio status. Similarly, you can call **getCurrentAudioCapturerInfoArray()** to obtain information about the audio recording application. The figure below shows the invoking relationship.
The following figure shows the calling relationship of **AudioStreamManager** APIs.
For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9).
**Figure 1** AudioStreamManager API calling relationship
**Figure 1** Audio stream management invoking relationship
![en-us_image_audio_stream_manager](figures/en-us_image_audio_stream_manager.png)
![](figures/en-us_image_audio_stream_manager.png)
**NOTE**: During application development, use **getStreamManager()** to create an **AudioStreamManager** instance. Then, you can call **on('audioRendererChange')** or **on('audioCapturerChange')** to listen for status, client, and audio attribute changes of the audio playback or recording application. To cancel the listening for these changes, call **off('audioRendererChange')** or **off('audioCapturerChange')**. You can call **getCurrentAudioRendererInfoArray()** to obtain information about the audio playback application, such as the unique audio stream ID, UID of the audio playback client, and audio status. Similarly, you can call **getCurrentAudioCapturerInfoArray()** to obtain information about the audio recording application.
## How to Develop
For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9).
1. Create an **AudioStreamManager** instance.
Before using **AudioStreamManager** APIs, you must use **getStreamManager()** to create an **AudioStreamManager** instance.
```js
var audioStreamManager = audio.getStreamManager();
var audioManager = audio.getAudioManager();
var audioStreamManager = audioManager.getStreamManager();
```
2. (Optional) Call **on('audioRendererChange')** to listen for audio renderer changes.
If an application needs to receive notifications when the audio playback application status, audio playback client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).
If an application needs to receive notifications when the audio playback application status, audio playback client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).
```js
audioStreamManager.on('audioRendererChange', (AudioRendererChangeInfoArray) => {
......@@ -61,7 +65,8 @@ If an application needs to receive notifications when the audio playback applica
```
4. (Optional) Call **on('audioCapturerChange')** to listen for audio capturer changes.
If an application needs to receive notifications when the audio recording application status, audio recording client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).
If an application needs to receive notifications when the audio recording application status, audio recording client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).
```js
audioStreamManager.on('audioCapturerChange', (AudioCapturerChangeInfoArray) => {
......@@ -94,7 +99,8 @@ If an application needs to receive notifications when the audio recording applic
```
6. (Optional) Call **getCurrentAudioRendererInfoArray()** to obtain information about the current audio renderer.
This API can be used to obtain the unique ID of the audio stream, UID of the audio playback client, audio status, and other information about the audio player. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured, for the device name and device address to be displayed correctly.
This API can be used to obtain the unique ID of the audio stream, UID of the audio playback client, audio status, and other information about the audio player. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured so that the device name and device address are displayed correctly.
```js
await audioStreamManager.getCurrentAudioRendererInfoArray().then( function (AudioRendererChangeInfoArray) {
......@@ -127,7 +133,7 @@ This API can be used to obtain the unique ID of the audio stream, UID of the aud
```
7. (Optional) Call **getCurrentAudioCapturerInfoArray()** to obtain information about the current audio capturer.
This API can be used to obtain the unique ID of the audio stream, UID of the audio recording client, audio status, and other information about the audio capturer. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured, for the device name and device address to be displayed correctly.
This API can be used to obtain the unique ID of the audio stream, UID of the audio recording client, audio status, and other information about the audio capturer. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured so that the device name and device address are displayed correctly.
```js
await audioStreamManager.getCurrentAudioCapturerInfoArray().then( function (AudioCapturerChangeInfoArray) {
......
# OpenSL ES Audio Recording Development
## When to Use
## Introduction
You can use OpenSL ES to develop the audio recording function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.
......
# OpenSL ES Audio Playback Development
## When to Use
## Introduction
You can use OpenSL ES to develop the audio playback function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.
......@@ -58,7 +58,7 @@ To use OpenSL ES to develop the audio playback function in OpenHarmony, perform
5. Obtain the **bufferQueueItf** instance of the **SL_IID_OH_BUFFERQUEUE** interface.
```
```c++
SLOHBufferQueueItf bufferQueueItf;
(*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_OH_BUFFERQUEUE, &bufferQueueItf);
```
......
# Video Playback Development
## When to Use
## Introduction
You can use video playback APIs to convert video data into visible signals, play the signals using output devices, and manage playback tasks. This document describes development for the following video playback scenarios: full-process, normal playback, video switching, and loop playback.
You can use video playback APIs to convert video data into visible signals, play the signals using output devices, and manage playback tasks. For example, you can start, suspend, and stop playback, release resources, set the volume, seek to a playback position, set the playback speed, and obtain track information. This document describes development for the following video playback scenarios: full-process, normal playback, video switching, and loop playback.
## Working Principles
The following figures show the video playback state transition and the interaction with external modules for video playback.
**Figure 1** Video playback state transition
![en-us_image_video_state_machine](figures/en-us_image_video_state_machine.png)
**Figure 2** Layer 0 diagram of video playback
**Figure 2** Interaction with external modules for video playback
![en-us_image_video_player](figures/en-us_image_video_player.png)
**NOTE**: When a third-party application calls a JS interface provided by the JS interface layer, the framework layer invokes the audio component through the media service of the native framework to output the audio data decoded by the software to the audio HDI. The graphics subsystem outputs the image data decoded by the codec HDI at the hardware interface layer to the display HDI. In this way, video playback is implemented.
*Note: Video playback requires hardware capabilities such as display, audio, and codec.*
1. A third-party application obtains a surface ID from the XComponent.
......
# Video Recording Development
## When to Use
## Introduction
During video recording, audio and video signals are captured, encoded, and saved to files. You can specify parameters such as the encoding format, encapsulation format, and file path for video recording.
You can use video recording APIs to capture audio and video signals, encode them, and save them to files. You can start, suspend, resume, and stop recording, and release resources. You can also specify parameters such as the encoding format, encapsulation format, and file path for video recording.
## Working Principles
The following figures show the video recording state transition and the interaction with external modules for video recording.
**Figure 1** Video recording state transition
![en-us_image_video_recorder_state_machine](figures/en-us_image_video_recorder_state_machine.png)
**Figure 2** Layer 0 diagram of video recording
**Figure 2** Interaction with external modules for video recording
![en-us_image_video_recorder_zero](figures/en-us_image_video_recorder_zero.png)
**NOTE**: When a third-party camera application or system camera calls a JS interface provided by the JS interface layer, the framework layer uses the media service of the native framework to invoke the audio component. Through the audio HDI, the audio component captures audio data, encodes the audio data through software, and saves the encoded audio data to a file. The graphics subsystem captures image data through the video HDI, encodes the image data through the video codec HDI, and saves the encoded image data to a file. In this way, video recording is implemented.
## Constraints
Before developing video recording, configure the permissions **ohos.permission.MICROPHONE** and **ohos.permission.CAMERA** for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop
For details about the APIs, see [VideoRecorder in the Media API](../reference/apis/js-apis-media.md#videorecorder9).
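As a quick orientation before the full example, the following sketch shows the kind of recording profile and configuration a **VideoRecorder** expects. All values are illustrative assumptions, and the `fd://` URL is a placeholder for a file descriptor opened for the output file.

```js
import media from '@ohos.multimedia.media'

// Illustrative recording profile; tune the values to your device capabilities.
let videoProfile = {
  audioBitrate: 48000,
  audioChannels: 2,
  audioCodec: media.CodecMimeType.AUDIO_AAC,
  audioSampleRate: 48000,
  fileFormat: media.ContainerFormatType.CFT_MPEG_4,
  videoBitrate: 2000000,
  videoCodec: media.CodecMimeType.VIDEO_MPEG4,
  videoFrameWidth: 640,
  videoFrameHeight: 480,
  videoFrameRate: 30
}

// Illustrative recorder configuration; the url is a placeholder fd.
let videoConfig = {
  audioSourceType: media.AudioSourceType.AUDIO_SOURCE_TYPE_MIC,
  videoSourceType: media.VideoSourceType.VIDEO_SOURCE_TYPE_SURFACE_YUV,
  profile: videoProfile,
  url: 'fd://xx',
  rotation: 0
}

media.createVideoRecorder().then((recorder) => {
  // Use the recorder: prepare(videoConfig), getInputSurface(), start(), and so on.
  console.info('createVideoRecorder success');
}).catch((error) => {
  console.error(`createVideoRecorder failed: ${error}`);
});
```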
......@@ -147,3 +157,4 @@ export class VideoRecorderDemo {
}
}
```
......@@ -4,6 +4,4 @@
- [Drawing Development](drawing-guidelines.md)
- [Raw File Development](rawfile-guidelines.md)
- [Native Window Development](native-window-guidelines.md)
- [Using MindSpore Lite for Model Inference](native-window-guidelines.md)
- [Using MindSpore Lite for Model Inference](mindspore-lite-guidelines.md)
# Rendering Control
ArkTS provides conditional rendering and loop rendering. Conditional rendering can render state-specific UI content based on the application status. Loop rendering iteratively obtains data from the data source and creates the corresponding component during each iteration.
## Conditional Rendering
Use **if/else** for conditional rendering.
> **NOTE**
>
> - State variables can be used in the **if/else** statement.
>
> - The **if/else** statement can be used to implement rendering of child components.
>
> - The **if/else** statement must be used in container components.
>
> - Some container components limit the type or number of subcomponents. When **if/else** is placed in these components, the limitation applies to components created in **if/else** statements. For example, when **if/else** is used in the **\<Grid>** container component, whose child components can only be **\<GridItem>**, only the **\<GridItem>** component can be used in the **if/else** statement.
```ts
Column() {
if (this.count < 0) {
Text('count is negative').fontSize(14)
} else if (this.count % 2 === 0) {
Text('count is even').fontSize(14)
} else {
Text('count is odd').fontSize(14)
}
}
```
## Loop Rendering
You can use **ForEach** to obtain data from arrays and create components for each data item.
```
ForEach(
arr: any[],
itemGenerator: (item: any, index?: number) => void,
keyGenerator?: (item: any, index?: number) => string
)
```
**Parameters**
| Name | Type | Mandatory| Description |
| ------------- | ------------------------------------- | ---- | ------------------------------------------------------------ |
| arr | any[] | Yes | An array, which can be empty (in which case no child component is created). Functions that return array-type values are also allowed, for example, **arr.slice(1, 3)**. However, such functions must not change any state variables, including the array itself; mutating functions such as **Array.splice**, **Array.sort**, and **Array.reverse** therefore cannot be used here.|
| itemGenerator | (item: any, index?: number) => void | Yes | A lambda function used to generate one or more child components for each data item in an array. A single child component or a list of child components must be included in parentheses.|
| keyGenerator | (item: any, index?: number) => string | No | An anonymous function used to generate a unique and fixed key value for each data item in an array. This key value must remain unchanged for the data item even when the item is relocated in the array. When the item is replaced by a new item, the key value of the new item must be different from that of the existing item. This key-value generator is optional. However, for performance reasons, it is strongly recommended that the key-value generator be provided, so that the development framework can better identify array changes. For example, if no key-value generator is provided, a reverse of an array will result in rebuilding of all nodes in **ForEach**.|
> **NOTE**
>
> - **ForEach** must be used in container components.
>
> - The generated child components should be allowed in the parent container component of **ForEach**.
>
> - The **itemGenerator** function can contain an **if/else** statement, and an **if/else** statement can contain **ForEach**.
>
> - The call sequence of **itemGenerator** functions may be different from that of the data items in the array. During the development, do not assume whether or when the **itemGenerator** and **keyGenerator** functions are executed. The following is an example of incorrect usage:
>
> ```ts
> ForEach(anArray.map((item1, index1) => { return { i: index1 + 1, data: item1 }; }),
> item => Text(`${item.i}. item.data.label`),
> item => item.data.id.toString())
> ```
## Example
```ts
// xxx.ets
@Entry
@Component
struct MyComponent {
@State arr: number[] = [10, 20, 30]
build() {
Column({ space: 5 }) {
Button('Reverse Array')
.onClick(() => {
this.arr.reverse()
})
ForEach(this.arr, (item: number) => {
Text(`item value: ${item}`).fontSize(18)
Divider().strokeWidth(2)
}, (item: number) => item.toString())
}
}
}
```
![forEach1](figures/forEach1.gif)
## Lazy Loading
You can use **LazyForEach** to iterate over provided data sources and create corresponding components during each iteration.
```ts
LazyForEach(
dataSource: IDataSource,
itemGenerator: (item: any) => void,
keyGenerator?: (item: any) => string
): void
interface IDataSource {
totalCount(): number;
getData(index: number): any;
registerDataChangeListener(listener: DataChangeListener): void;
unregisterDataChangeListener(listener: DataChangeListener): void;
}
interface DataChangeListener {
onDataReloaded(): void;
onDataAdd(index: number): void;
onDataMove(from: number, to: number): void;
onDataDelete(index: number): void;
onDataChange(index: number): void;
}
```
**Parameters**
| Name | Type | Mandatory| Description |
| ------------- | --------------------- | ---- | ------------------------------------------------------------ |
| dataSource | IDataSource | Yes | Object used to implement the **IDataSource** API. You need to implement related APIs. |
| itemGenerator | (item: any) => void | Yes | A lambda function used to generate one or more child components for each data item in an array. A single child component or a list of child components must be included in parentheses.|
| keyGenerator | (item: any) => string | No | An anonymous function used to generate a unique and fixed key value for each data item in an array. This key value must remain unchanged for the data item even when the item is relocated in the array. When the item is replaced by a new item, the key value of the new item must be different from that of the existing item. This key-value generator is optional. However, for performance reasons, it is strongly recommended that the key-value generator be provided, so that the development framework can better identify array changes. For example, if no key-value generator is provided, a reverse of an array will result in rebuilding of all nodes in **LazyForEach**.|
### Description of IDataSource
| Name | Description |
| ------------------------------------------------------------ | ---------------------- |
| totalCount(): number | Obtains the total number of data records. |
| getData(index: number): any | Obtains the data corresponding to the specified index. |
| registerDataChangeListener(listener:DataChangeListener): void | Registers a listener for data changes.|
| unregisterDataChangeListener(listener:DataChangeListener): void | Deregisters a listener for data changes.|
### Description of DataChangeListener
| Name | Description |
| -------------------------------------------------------- | -------------------------------------- |
| onDataReloaded(): void | Invoked when all data is reloaded. |
| onDataAdded(index: number): void<sup>(deprecated)</sup> | Invoked when data is added to the position indicated by the specified index. |
| onDataMoved(from: number, to: number): void<sup>(deprecated)</sup> | Invoked when data is moved from the **from** position to the **to** position.|
| onDataDeleted(index: number): void<sup>(deprecated)</sup> | Invoked when data is deleted from the position indicated by the specified index. |
| onDataChanged(index: number): void<sup>(deprecated)</sup> | Invoked when data in the position indicated by the specified index is changed. |
| onDataAdd(index: number): void<sup>8+</sup> | Invoked when data is added to the position indicated by the specified index. |
| onDataMove(from: number, to: number): void<sup>8+</sup> | Invoked when data is moved from the **from** position to the **to** position.|
| onDataDelete(index: number): void<sup>8+</sup> | Invoked when data is deleted from the position indicated by the specified index. |
| onDataChange(index: number): void<sup>8+</sup> | Invoked when data in the position indicated by the specified index is changed. |
## Example
```ts
// xxx.ets
class BasicDataSource implements IDataSource {
private listeners: DataChangeListener[] = []
public totalCount(): number {
return 0
}
public getData(index: number): any {
return undefined
}
registerDataChangeListener(listener: DataChangeListener): void {
if (this.listeners.indexOf(listener) < 0) {
console.info('add listener')
this.listeners.push(listener)
}
}
unregisterDataChangeListener(listener: DataChangeListener): void {
const pos = this.listeners.indexOf(listener);
if (pos >= 0) {
console.info('remove listener')
this.listeners.splice(pos, 1)
}
}
notifyDataReload(): void {
this.listeners.forEach(listener => {
listener.onDataReloaded()
})
}
notifyDataAdd(index: number): void {
this.listeners.forEach(listener => {
listener.onDataAdd(index)
})
}
notifyDataChange(index: number): void {
this.listeners.forEach(listener => {
listener.onDataChange(index)
})
}
notifyDataDelete(index: number): void {
this.listeners.forEach(listener => {
listener.onDataDelete(index)
})
}
notifyDataMove(from: number, to: number): void {
this.listeners.forEach(listener => {
listener.onDataMove(from, to)
})
}
}
class MyDataSource extends BasicDataSource {
// Initialize the data list.
private dataArray: string[] = ['/path/image0.png', '/path/image1.png', '/path/image2.png', '/path/image3.png']
public totalCount(): number {
return this.dataArray.length
}
public getData(index: number): any {
return this.dataArray[index]
}
public addData(index: number, data: string): void {
this.dataArray.splice(index, 0, data)
this.notifyDataAdd(index)
}
public pushData(data: string): void {
this.dataArray.push(data)
this.notifyDataAdd(this.dataArray.length - 1)
}
}
@Entry
@Component
struct MyComponent {
private data: MyDataSource = new MyDataSource()
build() {
List({ space: 3 }) {
LazyForEach(this.data, (item: string) => {
ListItem() {
Row() {
Image(item).width(50).height(50)
Text(item).fontSize(20).margin({ left: 10 })
}.margin({ left: 10, right: 10 })
}
.onClick(() => {
// The count increases by one each time the list is clicked.
this.data.pushData('/path/image' + this.data.totalCount() + '.png')
})
}, item => item)
}
}
}
```
> **NOTE**
>
> - **LazyForEach** must be used in the container component. Currently, only the **\<List>**, **\<Grid>**, and **\<Swiper>** components support lazy loading (that is, only the visible part and a small amount of data before and after the visible part are loaded for caching). For other components, all data is loaded at a time.
>
> - **LazyForEach** must create one and only one child component in each iteration.
>
> - The generated child components must be allowed in the parent container component of **LazyForEach**.
>
> - **LazyForEach** can be included in an **if/else** statement, but cannot contain such a statement.
>
> - For the purpose of high-performance rendering, when the **onDataChange** method of the **DataChangeListener** object is used to update the UI, the component update is triggered only when the state variable is used in the child component created by **itemGenerator**.
>
> - The call sequence of **itemGenerator** functions may be different from that of the data items in the data source. During the development, do not assume whether or when the **itemGenerator** and **keyGenerator** functions are executed. The following is an example of incorrect usage:
>
> ```ts
> LazyForEach(dataSource,
> item => Text(`${item.i}. item.data.label`),
> item => item.data.id.toString())
> ```
![lazyForEach](figures/lazyForEach.gif)
......@@ -45,6 +45,7 @@
- [@ohos.application.formInfo](js-apis-formInfo.md)
- [@ohos.application.formProvider](js-apis-formprovider.md)
- [@ohos.application.missionManager](js-apis-missionManager.md)
- [@ohos.application.quickFixManager](js-apis-application-quickFixManager.md)
- [@ohos.application.Want](js-apis-application-Want.md)
- [@ohos.continuation.continuationManager](js-apis-continuation-continuationExtraParams.md)
- [@ohos.continuation.continuationManager](js-apis-continuation-continuationManager.md)
......@@ -90,8 +91,8 @@
- UI Page
- [@ohos.animator](js-apis-animator.md)
- [@ohos.mediaquery](js-apis-mediaquery.md)
- [@ohos.promptAction](js-apis-promptAction.md)
- [@ohos.router](js-apis-router.md)
- [@ohos.uiAppearance](js-apis-uiappearance.md)
- Graphics
- [@ohos.animation.windowAnimationManager](js-apis-windowAnimationManager.md)
- [@ohos.display ](js-apis-display.md)
......@@ -145,7 +146,10 @@
- [@ohos.document](js-apis-document.md)
- [@ohos.environment](js-apis-environment.md)
- [@ohos.data.fileAccess](js-apis-fileAccess.md)
- [@ohos.fileExtensionInfo](js-apis-fileExtensionInfo.md)
- [@ohos.fileio](js-apis-fileio.md)
- [@ohos.filemanagement.userfile_manager](js-apis-userfilemanager.md)
- [@ohos.multimedia.medialibrary](js-apis-medialibrary.md)
- [@ohos.securityLabel](js-apis-securityLabel.md)
- [@ohos.statfs](js-apis-statfs.md)
......@@ -213,6 +217,7 @@
- [@ohos.geolocation](js-apis-geolocation.md)
- [@ohos.multimodalInput.inputConsumer](js-apis-inputconsumer.md)
- [@ohos.multimodalInput.inputDevice](js-apis-inputdevice.md)
- [@ohos.multimodalInput.inputDeviceCooperate](js-apis-cooperate.md)
- [@ohos.multimodalInput.inputEvent](js-apis-inputevent.md)
- [@ohos.multimodalInput.inputEventClient](js-apis-inputeventclient.md)
- [@ohos.multimodalInput.inputMonitor](js-apis-inputmonitor.md)
......@@ -261,6 +266,7 @@
- [@ohos.application.testRunner](js-apis-testRunner.md)
- [@ohos.uitest](js-apis-uitest.md)
- APIs No Longer Maintained
- [@ohos.bundleState](js-apis-deviceUsageStatistics.md)
- [@ohos.bytrace](js-apis-bytrace.md)
- [@ohos.data.storage](js-apis-data-storage.md)
- [@ohos.data.distributedData](js-apis-distributed-data.md)
......
......@@ -24,7 +24,9 @@ SystemCapability.BundleManager.DistributedBundleFramework
For details, see [Permission Levels](../../security/accesstoken-overview.md#permission-levels).
## distributedBundle.getRemoteAbilityInfo
## distributedBundle.getRemoteAbilityInfo<sup>deprecated</sup>
> This API is deprecated since API version 9. You are advised to use [getRemoteAbilityInfo](js-apis-distributedBundle.md) instead.
getRemoteAbilityInfo(elementName: ElementName, callback: AsyncCallback&lt;RemoteAbilityInfo&gt;): void;
......@@ -51,7 +53,9 @@ This is a system API and cannot be called by third-party applications.
## distributedBundle.getRemoteAbilityInfo
## distributedBundle.getRemoteAbilityInfo<sup>deprecated</sup>
> This API is deprecated since API version 9. You are advised to use [getRemoteAbilityInfo](js-apis-distributedBundle.md) instead.
getRemoteAbilityInfo(elementName: ElementName): Promise&lt;RemoteAbilityInfo&gt;
......@@ -81,7 +85,9 @@ This is a system API and cannot be called by third-party applications.
| ------------------------------------------------------------ | --------------------------------- |
| Promise\<[RemoteAbilityInfo](js-apis-bundle-remoteAbilityInfo.md)> | Promise used to return the remote ability information.|
## distributedBundle.getRemoteAbilityInfos
## distributedBundle.getRemoteAbilityInfos<sup>deprecated</sup>
> This API is deprecated since API version 9. You are advised to use [getRemoteAbilityInfo](js-apis-distributedBundle.md) instead.
getRemoteAbilityInfos(elementNames: Array&lt;ElementName&gt;, callback: AsyncCallback&lt;Array&lt;RemoteAbilityInfo&gt;&gt;): void;
......@@ -104,11 +110,13 @@ This is a system API and cannot be called by third-party applications.
| Name | Type | Mandatory| Description |
| ------------ | ------------------------------------------------------------ | ---- | -------------------------------------------------- |
| elementNames | Array<[ElementName](js-apis-bundle-ElementName.md)> | Yes | **ElementName** array, whose maximum length is 10. |
| callback | AsyncCallback< Array<[RemoteAbilityInfo](js-apis-bundle-remoteAbilityInfo.md)>> | Yes | Callback used to return an array of the remote ability information.|
| callback | AsyncCallback< Array<[RemoteAbilityInfo](js-apis-bundle-remoteAbilityInfo.md)>> | Yes | Callback used to return the remote ability information.|
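A minimal callback-style usage sketch, assuming the module is imported as **@ohos.distributedBundle** and using hypothetical element name values:

```js
import distributedBundle from '@ohos.distributedBundle';

// Hypothetical element names of abilities on a remote device.
let elementNames = [
  {
    deviceId: '<remote-device-id>',
    bundleName: 'com.example.application',
    abilityName: 'EntryAbility'
  }
];
distributedBundle.getRemoteAbilityInfos(elementNames, (err, data) => {
  if (err) {
    console.error(`getRemoteAbilityInfos failed: ${JSON.stringify(err)}`);
  } else {
    console.info(`getRemoteAbilityInfos success: ${JSON.stringify(data)}`);
  }
});
```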
## distributedBundle.getRemoteAbilityInfos<sup>deprecated</sup>
## distributedBundle.getRemoteAbilityInfos
> This API is deprecated since API version 9. You are advised to use [getRemoteAbilityInfo](js-apis-distributedBundle.md) instead.
getRemoteAbilityInfos(elementNames: Array&lt;ElementName&gt;): Promise&lt;Array&lt;RemoteAbilityInfo&gt;&gt;
......
# quickFixManager
The **quickFixManager** module provides APIs for quick fix. With quick fix, you can fix bugs in your application by applying patches, which is more efficient than by updating the entire application.
> **NOTE**
>
> The initial APIs of this module are supported since API version 9. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## Modules to Import
```js
import quickFixManager from '@ohos.application.quickFixManager';
```
## HapModuleQuickFixInfo
Defines the quick fix information at the HAP file level.
**System capability**: SystemCapability.Ability.AbilityRuntime.QuickFix
**System API**: This is a system API and cannot be called by third-party applications.
| Name | Readable/Writable| Type | Mandatory| Description |
| ----------- | -------- | -------------------- | ---- | ------------------------------------------------------------ |
| moduleName | Read only | string | Yes | Name of the HAP file. |
| originHapHash | Read only | string | Yes | Hash value of the HAP file. |
| quickFixFilePath | Read only | string | Yes | Installation path of the quick fix file. |
## ApplicationQuickFixInfo
Defines the quick fix information at the application level.
**System capability**: SystemCapability.Ability.AbilityRuntime.QuickFix
**System API**: This is a system API and cannot be called by third-party applications.
| Name | Readable/Writable| Type | Mandatory| Description |
| ----------- | -------- | -------------------- | ---- | ------------------------------------------------------------ |
| bundleName | Read only | string | Yes | Bundle name of the application. |
| bundleVersionCode | Read only | number | Yes | Internal version number of the application. |
| bundleVersionName | Read only | string | Yes | Version number of the application that is shown to users. |
| quickFixVersionCode | Read only | number | Yes | Version code of the quick fix patch package. |
| quickFixVersionName | Read only | string | Yes | Text description of the version number of the quick fix patch package. |
| hapModuleQuickFixInfo | Read only | Array\<[HapModuleQuickFixInfo](#hapmodulequickfixinfo)> | Yes | Quick fix information at the HAP file level. |
## quickFixManager.applyQuickFix
applyQuickFix(hapModuleQuickFixFiles: Array\<string>, callback: AsyncCallback\<void>): void;
Applies a quick fix patch. This API uses an asynchronous callback to return the result.
**Required permissions**: ohos.permission.INSTALL_BUNDLE
**System capability**: SystemCapability.Ability.AbilityRuntime.QuickFix
**System API**: This is a system API and cannot be called by third-party applications.
**Parameters**
| Parameter| Type| Mandatory| Description|
| -------- | -------- | -------- | -------- |
| hapModuleQuickFixFiles | Array\<string> | Yes | Quick fix files, each of which must contain a valid file path.|
| callback | AsyncCallback\<void> | Yes | Callback used to return the result.|
**Example**
```js
import quickFixManager from '@ohos.application.quickFixManager'
let hapModuleQuickFixFiles = ["/data/storage/el2/base/entry.hqf"]
quickFixManager.applyQuickFix(hapModuleQuickFixFiles, (error) => {
if (error) {
console.info(`applyQuickFix failed with error: ${error}`)
} else {
console.info('applyQuickFix success')
}
})
```
## quickFixManager.applyQuickFix
applyQuickFix(hapModuleQuickFixFiles: Array\<string>): Promise\<void>;
Applies a quick fix patch. This API uses a promise to return the result.
**Required permissions**: ohos.permission.INSTALL_BUNDLE
**System capability**: SystemCapability.Ability.AbilityRuntime.QuickFix
**System API**: This is a system API and cannot be called by third-party applications.
**Parameters**
| Parameter| Type| Mandatory| Description|
| -------- | -------- | -------- | -------- |
| hapModuleQuickFixFiles | Array\<string> | Yes | Quick fix files, each of which must contain a valid file path.|
**Return value**
| Type| Description|
| -------- | -------- |
| Promise\<void> | Promise used to return the result.|
**Example**
```js
import quickFixManager from '@ohos.application.quickFixManager'
let hapModuleQuickFixFiles = ["/data/storage/el2/base/entry.hqf"]
quickFixManager.applyQuickFix(hapModuleQuickFixFiles).then(() => {
console.info('applyQuickFix success')
}).catch((error) => {
console.info(`applyQuickFix err: ${error}`)
})
```
## quickFixManager.getApplicationQuickFixInfo
getApplicationQuickFixInfo(bundleName: string, callback: AsyncCallback\<ApplicationQuickFixInfo>): void;
Obtains the quick fix information of the application. This API uses an asynchronous callback to return the result.
**Required permissions**: ohos.permission.GET_BUNDLE_INFO_PRIVILEGED
**System capability**: SystemCapability.Ability.AbilityRuntime.QuickFix
**System API**: This is a system API and cannot be called by third-party applications.
**Parameters**
| Parameter| Type| Mandatory| Description|
| -------- | -------- | -------- | -------- |
| bundleName | string | Yes | Bundle name of the application. |
| callback | AsyncCallback\<[ApplicationQuickFixInfo](#applicationquickfixinfo)> | Yes | Callback used to return the quick fix information.|
**Example**
```js
import quickFixManager from '@ohos.application.quickFixManager'
let bundleName = "bundleName"
quickFixManager.getApplicationQuickFixInfo(bundleName, (error, data) => {
if (error) {
console.info(`getApplicationQuickFixInfo error: ${error}`)
} else {
console.info(`getApplicationQuickFixInfo success: ${data}`)
}
})
```
## quickFixManager.getApplicationQuickFixInfo
getApplicationQuickFixInfo(bundleName: string): Promise\<ApplicationQuickFixInfo>;
Obtains the quick fix information of the application. This API uses a promise to return the result.
**Required permissions**: ohos.permission.GET_BUNDLE_INFO_PRIVILEGED
**System capability**: SystemCapability.Ability.AbilityRuntime.QuickFix
**System API**: This is a system API and cannot be called by third-party applications.
**Parameters**
| Parameter| Type| Mandatory| Description|
| -------- | -------- | -------- | -------- |
| bundleName | string | Yes | Bundle name of the application. |
**Return value**
| Type| Description|
| -------- | -------- |
| Promise\<[ApplicationQuickFixInfo](#applicationquickfixinfo)> | Promise used to return the quick fix information.|
**Example**
```js
import quickFixManager from '@ohos.application.quickFixManager'
let bundleName = "bundleName"
quickFixManager.getApplicationQuickFixInfo(bundleName).then((data) => {
console.info(`getApplicationQuickFixInfo success: ${data}`)
}).catch((error) => {
console.info(`getApplicationQuickFixInfo err: ${error}`)
})
```
......@@ -6,7 +6,9 @@ The **RemoteAbilityInfo** module provides information about a remote ability.
>
> The initial APIs of this module are supported since API version 8. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## RemoteAbilityInfo
## RemoteAbilityInfo<sup>(deprecated)</sup>
> This API is deprecated since API version 9. You are advised to use [RemoteAbilityInfo](js-apis-bundleManager-remoteAbilityInfo.md) instead.
**System capability**: SystemCapability.BundleManager.DistributedBundleFramework
......
# ElementName
The **ElementName** module provides the element name information, which can be obtained through [Context.getElementName](js-apis-Context.md).
> **NOTE**
>
> The initial APIs of this module are supported since API version 9. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## ElementName
**System capability**: SystemCapability.BundleManager.BundleFramework.Core
| Name | Type | Readable| Writable| Description |
| ----------------------- | ---------| ---- | ---- | ------------------------- |
| deviceId | string | Yes | Yes | Device ID. |
| bundleName | string | Yes | Yes | Bundle name of the application. |
| abilityName | string | Yes | Yes | Name of the ability. |
| uri | string | Yes | Yes | Resource ID. |
| shortName | string | Yes | Yes | Short name of the ability. |
| moduleName | string | Yes | Yes | Module name of the HAP file to which the ability belongs. |
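As an illustration, an **ElementName** is typically expressed as an object literal whose fields match the table above. The values below are hypothetical.

```js
// A hypothetical ElementName object literal.
let element = {
  deviceId: '<device-id>',
  bundleName: 'com.example.myapplication',
  moduleName: 'entry',
  abilityName: 'EntryAbility'
};
```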
# RemoteAbilityInfo
The **RemoteAbilityInfo** module provides information about a remote ability, which can be obtained through [distributedBundle.getRemoteAbilityInfo](js-apis-distributedBundle.md).
> **NOTE**
>
> The initial APIs of this module are supported since API version 9. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## RemoteAbilityInfo
**System capability**: SystemCapability.BundleManager.DistributedBundleFramework
**System API**: This is a system API.
| Name | Type | Readable| Writable| Description |
| ----------- | -------------------------------------------- | ---- | ---- | ----------------------- |
| elementName | [ElementName](js-apis-bundleManager-elementName.md) | Yes | No | Element name information of the remote ability. |
| label | string | Yes | No | Label of the remote ability. |
| icon | string | Yes | No | Icon of the remote ability.|
......@@ -403,7 +403,7 @@ A formatted phone number is a standard numeric string, for example, 555 0100.
**Example**
```js
call.formatPhoneNumber("138xxxxxxxx",{
call.formatPhoneNumber("138xxxxxxxx", {
countryCode: "CN"
}, (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
......@@ -796,7 +796,7 @@ call.reject(rejectMessageOptions, (err, data) => {
## call.reject<sup>7+</sup>
reject(callId: number, callback: AsyncCallback<void\>): <void\>
reject(callId: number, callback: AsyncCallback\<void>): void
Rejects a call based on the specified call ID. This API uses an asynchronous callback to return the result.
......@@ -811,16 +811,11 @@ This is a system API.
| callId | number | Yes | Call ID. You can obtain the value by subscribing to **callDetailsChange** events.|
| callback | AsyncCallback&lt;void&gt; | Yes | Callback used to return the result. |
**Return value**
| Type | Description |
| ------------------- | --------------------------- |
| Promise&lt;void&gt; | Promise used to return the result.|
**Example**
```js
call.reject(1, (error, data) => {
call.reject(1, (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
});
```
......@@ -1621,8 +1616,8 @@ This is a system API.
**Example**
```js
call.on('callDetailsChange', (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
call.on('callDetailsChange', data => {
console.log(`callback: data->${JSON.stringify(data)}`);
});
```
......@@ -1646,8 +1641,8 @@ This is a system API.
**Example**
```js
call.on('callEventChange', (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
call.on('callEventChange', data => {
console.log(`callback: data->${JSON.stringify(data)}`);
});
```
......@@ -1671,8 +1666,8 @@ This is a system API.
**Example**
```js
call.on('callDisconnectedCause', (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
call.on('callDisconnectedCause', data => {
console.log(`callback: data->${JSON.stringify(data)}`);
});
```
......@@ -1696,8 +1691,8 @@ This is a system API.
**Example**
```js
call.on('mmiCodeResult', (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
call.on('mmiCodeResult', data => {
console.log(`callback: data->${JSON.stringify(data)}`);
});
```
......@@ -1721,8 +1716,8 @@ This is a system API.
**Example**
```js
call.off('callDetailsChange', (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
call.off('callDetailsChange', data => {
console.log(`callback: data->${JSON.stringify(data)}`);
});
```
......@@ -1746,8 +1741,8 @@ This is a system API.
**Example**
```js
call.off('callEventChange', (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
call.off('callEventChange', data => {
console.log(`callback: data->${JSON.stringify(data)}`);
});
```
......@@ -1771,8 +1766,8 @@ This is a system API.
**Example**
```js
call.off('callDisconnectedCause', (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
call.off('callDisconnectedCause', data => {
console.log(`callback: data->${JSON.stringify(data)}`);
});
```
......@@ -1796,8 +1791,8 @@ This is a system API.
**Example**
```js
call.off('mmiCodeResult', (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
call.off('mmiCodeResult', data => {
console.log(`callback: data->${JSON.stringify(data)}`);
});
```
......@@ -2386,7 +2381,7 @@ This is a system API.
let audioDeviceOptions={
bluetoothAddress: "IEEE 802-2014"
}
call.setAudioDevice(1, audioDeviceOptions, (err, value) => {
call.setAudioDevice(1, audioDeviceOptions, (err, data) => {
console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
});
```
......
# Configuration Policy
The configuration policy provides the capability of obtaining the custom configuration directory and file path based on the predefined custom configuration level.
The **configPolicy** module provides APIs for obtaining the custom configuration directory and file path based on the predefined custom configuration level.
> **NOTE**
>
> The initial APIs of this module are supported since API version 8. Newly added APIs will be marked with a superscript to indicate their earliest API version.
>
> The APIs of this module are system APIs and cannot be called by third-party applications.
> The APIs provided by this module are system APIs.
## Modules to Import
......@@ -32,7 +32,7 @@ For example, if the **config.xml** file is stored in **/system/etc/config.xml**
**Example**
```js
configPolicy.getOneCfgFile('etc/config.xml', (error, value) => {
if (error == undefined) {
if (error == null) {
console.log("value is " + value);
} else {
console.log("error occurs "+ error);
......@@ -73,7 +73,8 @@ Obtains the path of a configuration file with the specified name and highest pri
getCfgFiles(relPath: string, callback: AsyncCallback&lt;Array&lt;string&gt;&gt;)
Obtains all configuration files with the specified name and lists them in ascending order of priority. This API uses an asynchronous callback to return the result. For example, if the **config.xml** file is stored in **/system/etc/config.xml** and **/sys_pod/etc/config.xml**, then **/system/etc/config.xml, /sys_pod/etc/config.xml** is returned.
Obtains a list of configuration files with the specified name, sorted in ascending order of priority. This API uses an asynchronous callback to return the result.
For example, if the **config.xml** file is stored in **/system/etc/config.xml** and **/sys_pod/etc/config.xml** (in ascending order of priority), then **/system/etc/config.xml, /sys_pod/etc/config.xml** is returned.
**System capability**: SystemCapability.Customization.ConfigPolicy
......@@ -86,7 +87,7 @@ Obtains all configuration files with the specified name and lists them in ascend
**Example**
```js
configPolicy.getCfgFiles('etc/config.xml', (error, value) => {
if (error == undefined) {
if (error == null) {
console.log("value is " + value);
} else {
console.log("error occurs "+ error);
......@@ -99,7 +100,7 @@ Obtains all configuration files with the specified name and lists them in ascend
getCfgFiles(relPath: string): Promise&lt;Array&lt;string&gt;&gt;
Obtains all configuration files with the specified name and lists them in ascending order of priority. This API uses a promise to return the result.
Obtains a list of configuration files with the specified name, sorted in ascending order of priority. This API uses a promise to return the result.
**System capability**: SystemCapability.Customization.ConfigPolicy
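A minimal promise-style usage sketch, mirroring the callback example above:

```js
configPolicy.getCfgFiles('etc/config.xml').then(value => {
    console.log("value is " + value);
}).catch(error => {
    console.log("error occurs " + error);
});
```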
......@@ -127,7 +128,7 @@ Obtains all configuration files with the specified name and lists them in ascend
getCfgDirList(callback: AsyncCallback&lt;Array&lt;string&gt;&gt;)
Obtains the configuration level directory list. This API uses an asynchronous callback to return the result.
Obtains the list of configuration level directories. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.Customization.ConfigPolicy
......@@ -139,7 +140,7 @@ Obtains the configuration level directory list. This API uses an asynchronous ca
**Example**
```js
configPolicy.getCfgDirList((error, value) => {
if (error == undefined) {
if (error == null) {
console.log("value is " + value);
} else {
console.log("error occurs "+ error);
......@@ -152,7 +153,7 @@ Obtains the configuration level directory list. This API uses an asynchronous ca
getCfgDirList(): Promise&lt;Array&lt;string&gt;&gt;
Obtains the configuration level directory list. This API uses a promise to return the result.
Obtains the list of configuration level directories. This API uses a promise to return the result.
**System capability**: SystemCapability.Customization.ConfigPolicy
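A minimal promise-style usage sketch, mirroring the callback example above:

```js
configPolicy.getCfgDirList().then(value => {
    console.log("value is " + value);
}).catch(error => {
    console.log("error occurs " + error);
});
```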
......
# Screen Hopping
The Screen Hopping module enables two or more networked devices to share the keyboard and mouse for collaborative operations.
> **NOTE**
>
> The initial APIs of this module are supported since API version 9. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## Modules to Import
```js
import inputDeviceCooperate from '@ohos.multimodalInput.inputDeviceCooperate'
```
## inputDeviceCooperate.enable
enable(enable: boolean, callback: AsyncCallback&lt;void&gt;): void
Specifies whether to enable screen hopping. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | ------------------------- | ---- | --------------------------- |
| enable | boolean | Yes | Whether to enable screen hopping.|
| callback | AsyncCallback&lt;void&gt; | Yes |Callback used to return the result. |
**Example**
```js
try {
inputDeviceCooperate.enable(true, (error) => {
if (error) {
console.log(`Keyboard mouse crossing enable failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
return;
}
console.log(`Keyboard mouse crossing enable success.`);
});
} catch (error) {
console.log(`Keyboard mouse crossing enable failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.enable
enable(enable: boolean): Promise&lt;void&gt;
Specifies whether to enable screen hopping. This API uses a promise to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| --------- | ------- | ---- | ------------------------------------------------------------------- |
| enable | boolean | Yes | Whether to enable screen hopping. |
**Return value**
| Type | Description |
| ------------------- | ------------------------------- |
| Promise&lt;void&gt; | Promise used to return the result. |
**Example**
```js
try {
inputDeviceCooperate.enable(true).then(() => {
console.log(`Keyboard mouse crossing enable success.`);
}, (error) => {
console.log(`Keyboard mouse crossing enable failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
});
} catch (error) {
console.log(`Keyboard mouse crossing enable failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.start
start(sinkDeviceDescriptor: string, srcInputDeviceId: number, callback: AsyncCallback\<void>): void
Starts screen hopping. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | ---------------------------- | ---- | ---------------------------- |
| sinkDeviceDescriptor | string | Yes | Descriptor of the target device for screen hopping. |
| srcInputDeviceId | number | Yes | ID of the target device for screen hopping. |
| callback | AsyncCallback\<void> | Yes | Callback used to return the result.|
**Error codes**
For details about the error codes, see [Screen Hopping Error Codes](../errorcodes/errorcodes-multimodalinput.md).
| ID| Error Message|
| -------- | ---------------------------------------- |
| 4400001 | This error code is reported if an invalid device descriptor is passed to the screen hopping API. |
| 4400002 | This error code is reported if the screen hopping status is abnormal when the screen hopping API is called. |
**Example**
```js
try {
inputDeviceCooperate.start(sinkDeviceDescriptor, srcInputDeviceId, (error) => {
if (error) {
console.log(`Start Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
return;
}
console.log(`Start Keyboard mouse crossing success.`);
});
} catch (error) {
console.log(`Start Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.start
start(sinkDeviceDescriptor: string, srcInputDeviceId: number): Promise\<void>
Starts screen hopping. This API uses a promise to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | ---------------------------- | ---- | ---------------------------- |
| sinkDeviceDescriptor | string | Yes | Descriptor of the target device for screen hopping. |
| srcInputDeviceId | number | Yes | ID of the target device for screen hopping. |
**Return value**
| Type | Description |
| ---------------------- | ------------------------------- |
| Promise\<void> | Promise used to return the result. |
**Error codes**
For details about the error codes, see [Screen Hopping Error Codes](../errorcodes/errorcodes-multimodalinput.md).
| ID| Error Message|
| -------- | ---------------------------------------- |
| 4400001 | This error code is reported if an invalid device descriptor is passed to the screen hopping API. |
| 4400002 | This error code is reported if the screen hopping status is abnormal when the screen hopping API is called. |
**Example**
```js
try {
inputDeviceCooperate.start(sinkDeviceDescriptor, srcInputDeviceId).then(() => {
console.log(`Start Keyboard mouse crossing success.`);
}, (error) => {
console.log(`Start Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
});
} catch (error) {
console.log(`Start Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.stop
stop(callback: AsyncCallback\<void>): void
Stops screen hopping. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | ---------------------------- | ---- | ---------------------------- |
| callback | AsyncCallback\<void> | Yes | Callback used to return the result. |
**Example**
```js
try {
inputDeviceCooperate.stop((error) => {
if (error) {
console.log(`Stop Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
return;
}
console.log(`Stop Keyboard mouse crossing success.`);
});
} catch (error) {
console.log(`Stop Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.stop
stop(): Promise\<void>
Stops screen hopping. This API uses a promise to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Return value**
| Type | Description |
| -------- | ---------------------------- |
| Promise\<void> | Promise used to return the result. |
**Example**
```js
try {
  inputDeviceCooperate.stop().then(() => {
    console.log(`Stop Keyboard mouse crossing success.`);
  }, (error) => {
    console.log(`Stop Keyboard mouse crossing failed, error: ${JSON.stringify(error, ['code', 'message'])}`);
  });
} catch (error) {
  console.log(`Stop Keyboard mouse crossing failed, error: ${JSON.stringify(error, ['code', 'message'])}`);
}
```
## inputDeviceCooperate.getState
getState(deviceDescriptor: string, callback: AsyncCallback<{ state: boolean }>): void
Checks whether screen hopping is enabled. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | --------- | ---- | ---------------------------- |
| deviceDescriptor | string | Yes | Descriptor of the target device for screen hopping. |
| callback | AsyncCallback<{ state: boolean }> | Yes | Callback used to return the result. |
**Example**
```js
// deviceDescriptor is assumed to have been obtained in advance.
try {
  inputDeviceCooperate.getState(deviceDescriptor, (error, data) => {
    if (error) {
      console.log(`Get the status failed, error: ${JSON.stringify(error, ['code', 'message'])}`);
      return;
    }
    console.log(`Get the status success, data: ${JSON.stringify(data)}`);
  });
} catch (error) {
  console.log(`Get the status failed, error: ${JSON.stringify(error, ['code', 'message'])}`);
}
```
## inputDeviceCooperate.getState
getState(deviceDescriptor: string): Promise<{ state: boolean }>
Checks whether screen hopping is enabled. This API uses a promise to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | --------- | ---- | ---------------------------- |
| deviceDescriptor | string | Yes | Descriptor of the target device for screen hopping. |
**Return value**
| Type | Description |
| ------------------- | ------------------------------- |
| Promise<{ state: boolean }> | Promise used to return the result. |
**Example**
```js
// deviceDescriptor is assumed to have been obtained in advance.
try {
  inputDeviceCooperate.getState(deviceDescriptor).then((data) => {
    console.log(`Get the status success, data: ${JSON.stringify(data)}`);
  }, (error) => {
    console.log(`Get the status failed, error: ${JSON.stringify(error, ['code', 'message'])}`);
  });
} catch (error) {
  console.log(`Get the status failed, error: ${JSON.stringify(error, ['code', 'message'])}`);
}
```
## on('cooperation')
on(type: 'cooperation', callback: AsyncCallback<{ deviceDescriptor: string, eventMsg: EventMsg }>): void
Enables listening for screen hopping events.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory| Description |
| -------- | ---------------------------- | ---- | ---------------------------- |
| type | string | Yes | Event type. The value is **cooperation**. |
| callback | AsyncCallback<{ deviceDescriptor: string, eventMsg: [EventMsg](#eventmsg) }> | Yes | Callback used to return the result. |
**Example**
```js
try {
  inputDeviceCooperate.on('cooperation', (data) => {
    console.log(`Keyboard mouse crossing event: ${JSON.stringify(data)}`);
  });
} catch (error) {
  console.log(`Register failed, error: ${JSON.stringify(error, ['code', 'message'])}`);
}
```
## off('cooperation')
off(type: 'cooperation', callback?: AsyncCallback\<void>): void
Disables listening for screen hopping events.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | ---------------------------- | ---- | ---------------------------- |
| type | string | Yes | Event type. The value is **cooperation**. |
| callback | AsyncCallback\<void> | No | Callback to be unregistered. If this parameter is not specified, all callbacks registered by the current application will be unregistered.|
**Example**
```js
// Unregister a single callback.
function callback(event) {
  console.log(`Keyboard mouse crossing event: ${JSON.stringify(event)}`);
  return false;
}
try {
  inputDeviceCooperate.on('cooperation', callback);
  inputDeviceCooperate.off('cooperation', callback);
} catch (error) {
  console.log(`Execute failed, error: ${JSON.stringify(error, ['code', 'message'])}`);
}
```
```js
// Unregister all callbacks.
function callback(event) {
  console.log(`Keyboard mouse crossing event: ${JSON.stringify(event)}`);
  return false;
}
try {
  inputDeviceCooperate.on('cooperation', callback);
  inputDeviceCooperate.off('cooperation');
} catch (error) {
  console.log(`Execute failed, error: ${JSON.stringify(error, ['code', 'message'])}`);
}
```
## EventMsg
Enumerates the screen hopping events. A usage sketch follows the table below.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
| Name | Value | Description |
| -------- | --------- | ----------------- |
| MSG_COOPERATE_INFO_START | 200 | Screen hopping starts. |
| MSG_COOPERATE_INFO_SUCCESS | 201 | Screen hopping succeeds. |
| MSG_COOPERATE_INFO_FAIL | 202 | Screen hopping fails. |
| MSG_COOPERATE_STATE_ON | 500 | Screen hopping is enabled. |
| MSG_COOPERATE_STATE_OFF | 501 | Screen hopping is disabled. |
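The following is a minimal sketch of how these values could be consumed in the **on('cooperation')** callback documented above. The access path **inputDeviceCooperate.EventMsg** and the branch handling are assumptions for illustration, not a prescribed pattern.
```js
try {
  inputDeviceCooperate.on('cooperation', (data) => {
    // data carries the deviceDescriptor and the EventMsg value of the current event.
    switch (data.eventMsg) {
      case inputDeviceCooperate.EventMsg.MSG_COOPERATE_INFO_SUCCESS:
        console.log(`Screen hopping to ${data.deviceDescriptor} succeeded.`);
        break;
      case inputDeviceCooperate.EventMsg.MSG_COOPERATE_INFO_FAIL:
        console.log(`Screen hopping to ${data.deviceDescriptor} failed.`);
        break;
      default:
        console.log(`Screen hopping event: ${JSON.stringify(data)}`);
    }
  });
} catch (error) {
  console.log(`Register failed, error: ${JSON.stringify(error, ['code', 'message'])}`);
}
```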
# User File Extension Info
The **fileExtensionInfo** module defines attributes in **RootInfo** and **FileInfo** of the user file access and management module.
>**NOTE**
>
>- The initial APIs of this module are supported since API version 9. Newly added APIs will be marked with a superscript to indicate their earliest API version.
>- The APIs provided by this module are system APIs.
## Modules to Import
```js
import fileExtensionInfo from '@ohos.fileExtensionInfo';
```
## fileExtensionInfo.DeviceType
Defines the values of **deviceType** used in **RootInfo**.
**System capability**: SystemCapability.FileManagement.UserFileService
| Name| Value| Description|
| ----- | ------ | ------ |
| DEVICE_LOCAL_DISK | 1 | Local disk.|
| DEVICE_SHARED_DISK | 2 | Shared disk.|
| DEVICE_SHARED_TERMINAL | 3 | Distributed network device.|
| DEVICE_NETWORK_NEIGHBORHOODS | 4 | Network neighbor device.|
| DEVICE_EXTERNAL_MTP | 5 | MTP device.|
| DEVICE_EXTERNAL_USB | 6 | USB device.|
| DEVICE_EXTERNAL_CLOUD | 7 | Cloud disk.|
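As a hedged illustration only, a **RootInfo.deviceType** value can be compared against these constants. The **rootInfo** object below is hypothetical and would be obtained elsewhere (for example, through the user file access APIs).
```js
import fileExtensionInfo from '@ohos.fileExtensionInfo';

// rootInfo is a hypothetical RootInfo object; only its deviceType field is used here.
function isExternalDevice(rootInfo) {
  return rootInfo.deviceType === fileExtensionInfo.DeviceType.DEVICE_EXTERNAL_MTP ||
         rootInfo.deviceType === fileExtensionInfo.DeviceType.DEVICE_EXTERNAL_USB;
}
```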
## fileExtensionInfo.DeviceFlag
Defines the values of **deviceFlags** used in **RootInfo**. **deviceFlags** is checked against these flags through a bitwise AND operation to determine whether a capability is available, as illustrated in the sketch after the table below.
**System capability**: SystemCapability.FileManagement.UserFileService
### Attributes
| Name| Type | Value| Readable| Writable| Description |
| ------ | ------ | ---- | ---- | ---- | -------- |
| SUPPORTS_READ | number | 0b1 | Yes | No | Read support.|
| SUPPORTS_WRITE | number | 0b10 | Yes | No | Write support.|
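A minimal sketch of that bitwise check follows. It assumes a **rootInfo** object whose **deviceFlags** field follows the definitions above; the object and function names are illustrative only.
```js
import fileExtensionInfo from '@ohos.fileExtensionInfo';

// rootInfo is a hypothetical RootInfo object obtained elsewhere; only its deviceFlags field is used.
function describeDeviceCapabilities(rootInfo) {
  // A capability is available when the corresponding bit is set in deviceFlags.
  let readable = (rootInfo.deviceFlags & fileExtensionInfo.DeviceFlag.SUPPORTS_READ) !== 0;
  let writable = (rootInfo.deviceFlags & fileExtensionInfo.DeviceFlag.SUPPORTS_WRITE) !== 0;
  console.log(`readable: ${readable}, writable: ${writable}`);
}
```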
## fileExtensionInfo.DocumentFlag
Defines the values of **mode** used in **FileInfo**.
**System capability**: SystemCapability.FileManagement.UserFileService
### Attributes
| Name| Type | Value| Readable| Writable| Description |
| ------ | ------ | ---- | ---- | ---- | -------- |
| REPRESENTS_FILE | number | 0b1 | Yes | No | File.|
| REPRESENTS_DIR | number | 0b10 | Yes | No | Directory.|
| SUPPORTS_READ | number | 0b100 | Yes | No | Read support.|
| SUPPORTS_WRITE | number | 0b1000 | Yes | No | Write support.|
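Similarly, a **FileInfo.mode** value can be tested bit by bit. The sketch below assumes a **fileInfo** object whose **mode** field follows the definitions above; the names are illustrative only.
```js
import fileExtensionInfo from '@ohos.fileExtensionInfo';

// fileInfo is a hypothetical FileInfo object obtained elsewhere; only its mode field is used.
function isWritableFile(fileInfo) {
  let isFile = (fileInfo.mode & fileExtensionInfo.DocumentFlag.REPRESENTS_FILE) !== 0;
  let writable = (fileInfo.mode & fileExtensionInfo.DocumentFlag.SUPPORTS_WRITE) !== 0;
  return isFile && writable;
}
```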
......@@ -1840,10 +1840,10 @@ Defines the network status.
| Name | Value | Description |
| ----------------------------- | ---- | -------------------------- |
| REG_STATE_NO_SERVICE | 0 | The device cannot use any services, including data, SMS, and call services. |
| REG_STATE_IN_SERVICE | 1 | The device can use services properly, including data, SMS, and call services. |
| REG_STATE_EMERGENCY_CALL_ONLY | 2 | The device can use only the emergency call service. |
| REG_STATE_POWER_OFF | 3 | The device cannot communicate with the network because the cellular radio service is disabled or the modem is powered off. |
## NsaState
......