Commit f77cc6f7 authored by yuyaozhi, committed by Gitee

Merge branch 'master' of gitee.com:openharmony/docs into master

Signed-off-by: yuyaozhi <yuyaozhi@huawei.com>
@@ -150,7 +150,7 @@ Currently, the OpenHarmony community supports 17 types of development boards, wh

| System Type| Board Model| Chip Model| <div style="width:200pt">Function Description and Use Case</div> | Application Scenario| Code Repository |
| -------- | -------- | -------- | -------- | -------- | -------- |
| Standard system| Runhe HH-SCDAYU200| RK3568 | <div style="width:200pt">Function description:<br>Bolstered by the Rockchip RK3568, the HH-SCDAYU200 development board integrates the dual-core GPU and efficient NPU. Its quad-core 64-bit Cortex-A55 processor uses the advanced 22 nm fabrication process and is clocked at up to 2.0 GHz. The board is packed with Bluetooth, Wi-Fi, audio, video, and camera features, with a wide range of expansion ports to accommodate various video inputs and outputs. It comes with dual GE auto-sensing RJ45 ports, so it can be used in multi-connectivity products, such as network video recorders (NVRs) and industrial gateways.</div> | Entertainment, easy travel, and smart home, such as kitchen hoods, ovens, and treadmills.| [device_soc_rockchip](https://gitee.com/openharmony/device_soc_rockchip)<br>[device_board_hihope](https://gitee.com/openharmony/device_board_hihope)<br>[vendor_hihope](https://gitee.com/openharmony/vendor_hihope) <br> |
| Small system| Hispark_Taurus | Hi3516DV300 | <div style="width:200pt">Function description:<br>Hi3516D V300 is the next-generation system on chip (SoC) for smart HD IP cameras. It integrates the next-generation image signal processor (ISP), H.265 video compression encoder, and high-performance NNIE engine, and delivers high performance in terms of low bit rate, high image quality, intelligent processing and analysis, and low power consumption.</div> | Smart devices with screens, such as refrigerators with screens and head units.| [device_soc_hisilicon](https://gitee.com/openharmony/device_soc_hisilicon)<br>[device_board_hisilicon](https://gitee.com/openharmony/device_board_hisilicon)<br>[vendor_hisilicon](https://gitee.com/openharmony/vendor_hisilicon) <br> |
| Mini system| Multi-modal V200Z-R | BES2600 | <div style="width:200pt">Function description:<br>The multi-modal V200Z-R development board is a high-performance, multi-functional, and cost-effective AIoT SoC powered by the BES2600WM chip of Bestechnic. It integrates a quad-core ARM processor with a frequency of up to 1 GHz as well as dual-mode Wi-Fi and dual-mode Bluetooth. The board supports the 802.11 a/b/g/n and BT/BLE 5.2 standards. It is able to accommodate RAM of up to 42 MB and flash memory of up to 32 MB, and supports the MIPI display serial interface (DSI) and camera serial interface (CSI). It is applicable to various AIoT multi-modal VUI and GUI interaction scenarios.<br>Use case:<br>[Multi-modal V200Z-R Use Case](device-dev/porting/porting-bes2600w-on-minisystem-display-demo.md)</div> | Smart hardware, and smart devices with screens, such as speakers and watches.| [device_soc_bestechnic](https://gitee.com/openharmony/device_soc_bestechnic)<br>[device_board_fnlink](https://gitee.com/openharmony/device_board_fnlink)<br>[vendor_bestechnic](https://gitee.com/openharmony/vendor_bestechnic) <br> |
@@ -7,7 +7,7 @@ To ensure successful communications between the client and server, interfaces re

![IDL-interface-description](./figures/IDL-interface-description.png)

**IDL provides the following functions:**

- Declares interfaces provided by system services for external systems, and based on the interface declaration, generates C, C++, JS, or TS code for inter-process communication (IPC) or remote procedure call (RPC) proxies and stubs during compilation.
@@ -17,7 +17,7 @@ IDL provides the following functions:

![IPC-RPC-communication-model](./figures/IPC-RPC-communication-model.png)

**IDL has the following advantages:**

- Services are defined in the form of interfaces in IDL. Therefore, you do not need to focus on implementation details.
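As a rough illustration (not part of this commit), an interface declared in an `.idl` file typically looks like the sketch below; the service and method names are hypothetical and merely mirror the test service used in the excerpts that follow.

```
interface OHOS.IIdlTestService {
    int TestIntTransaction([in] int data);
    void TestStringTransaction([in] String data);
}
```

Running the IDL tool over such a file is what produces proxy and stub classes like the `IdlTestServiceProxy` referenced in the code below.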
@@ -433,7 +433,7 @@ export default {
    console.log('ServiceAbility want:' + JSON.stringify(want));
    console.log('ServiceAbility want name:' + want.bundleName)
  } catch(err) {
    console.log('ServiceAbility error:' + err)
  }
  console.info('ServiceAbility onConnect end');
  return new IdlTestImp('connect');
@@ -455,13 +455,13 @@ import featureAbility from '@ohos.ability.featureAbility';
function callbackTestIntTransaction(result: number, ret: number): void {
  if (result == 0 && ret == 124) {
    console.log('case 1 success');
  }
}

function callbackTestStringTransaction(result: number): void {
  if (result == 0) {
    console.log('case 2 success');
  }
}

@@ -472,17 +472,17 @@ var onAbilityConnectDone = {
    testProxy.testStringTransaction('hello', callbackTestStringTransaction);
  },
  onDisconnect:function (elementName) {
    console.log('onDisconnectService onDisconnect');
  },
  onFailed:function (code) {
    console.log('onDisconnectService onFailed');
  }
};
function connectAbility(): void {
  let want = {
    bundleName: 'com.example.myapplicationidl',
    abilityName: 'com.example.myapplicationidl.ServiceAbility'
  };
  let connectionId = -1;
  connectionId = featureAbility.connectAbility(want, onAbilityConnectDone);
@@ -495,7 +495,7 @@ function connectAbility: void {

You can send a class from one process to another through IPC interfaces. However, you must ensure that the peer can use the code of this class and this class supports the **marshalling** and **unmarshalling** methods. OpenHarmony uses **marshalling** and **unmarshalling** to serialize and deserialize objects into objects that can be identified by each process.

**To create a class that supports the sequenceable type, perform the following operations:**

1. Implement the **marshalling** method, which obtains the current state of the object and serializes the object into a **Parcel** object.
2. Implement the **unmarshalling** method, which deserializes the object from a **Parcel** object.
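A minimal sketch of such a class is shown below; it assumes a payload of one number and one string, and the class and field names are illustrative rather than taken from this commit.

```ts
import rpc from '@ohos.rpc';

export default class MySequenceable {
  num: number = 0;
  str: string = '';

  constructor(num: number, str: string) {
    this.num = num;
    this.str = str;
  }

  // Serialize the current state of the object into the Parcel object.
  marshalling(messageParcel: rpc.MessageParcel): boolean {
    messageParcel.writeInt(this.num);
    messageParcel.writeString(this.str);
    return true;
  }

  // Deserialize the object from the Parcel object.
  unmarshalling(messageParcel: rpc.MessageParcel): boolean {
    this.num = messageParcel.readInt();
    this.str = messageParcel.readString();
    return true;
  }
}
```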
@@ -595,7 +595,7 @@ export default class IdlTestServiceProxy implements IIdlTestService {
```ts
    let _reply = new rpc.MessageParcel();
    _data.writeInt(data);
    this.proxy.sendRequest(IdlTestServiceProxy.COMMAND_TEST_INT_TRANSACTION, _data, _reply, _option).then(function(result) {
      if (result.errCode == 0) {
        let _errCode = result.reply.readInt();
        if (_errCode != 0) {
          let _returnValue = undefined;
@@ -605,7 +605,7 @@ export default class IdlTestServiceProxy implements IIdlTestService {
        let _returnValue = result.reply.readInt();
        callback(_errCode, _returnValue);
      } else {
        console.log('sendRequest failed, errCode: ' + result.errCode);
      }
    })
  }
@@ -617,11 +617,11 @@ export default class IdlTestServiceProxy implements IIdlTestService {
    let _reply = new rpc.MessageParcel();
    _data.writeString(data);
    this.proxy.sendRequest(IdlTestServiceProxy.COMMAND_TEST_STRING_TRANSACTION, _data, _reply, _option).then(function(result) {
      if (result.errCode == 0) {
        let _errCode = result.reply.readInt();
        callback(_errCode);
      } else {
        console.log('sendRequest failed, errCode: ' + result.errCode);
      }
    })
  }
@@ -644,12 +644,12 @@ import nativeMgr from 'nativeManager';
function testIntTransactionCallback(errCode: number, returnValue: number)
{
  console.log('errCode: ' + errCode + ' returnValue: ' + returnValue);
}

function testStringTransactionCallback(errCode: number)
{
  console.log('errCode: ' + errCode);
}

function jsProxyTriggerCppStub()
@@ -660,6 +660,6 @@ function jsProxyTriggerCppStub()
  tsProxy.testIntTransaction(10, testIntTransactionCallback);
  // Call testStringTransaction.
  tsProxy.testStringTransaction('test', testStringTransactionCallback);
}
```
@@ -8,14 +8,13 @@

- Quick Start
  - Getting Started
    - [Preparations](quick-start/start-overview.md)
    - [Getting Started with ArkTS in Stage Model](quick-start/start-with-ets-stage.md)
    - [Getting Started with ArkTS in FA Model](quick-start/start-with-ets-fa.md)
    - [Getting Started with JavaScript in FA Model](quick-start/start-with-js-fa.md)
  - Development Fundamentals
    - [Application Package Structure Configuration File (FA Model)](quick-start/package-structure.md)
    - [Application Package Structure Configuration File (Stage Model)](quick-start/stage-structure.md)
    - [SysCap](quick-start/syscap.md)
- Development
  - [Ability Development](ability/Readme-EN.md)
# Ability Development

- [Ability Framework Overview](ability-brief.md)
- [Context Usage](context-userguide.md)
- FA Model
@@ -19,5 +20,3 @@
- [Ability Assistant Usage](ability-assistant-guidelines.md)
- [ContinuationManager Development](continuationmanager.md)
- [Test Framework Usage](ability-delegator.md)
@@ -14,20 +14,20 @@ As the entry of the ability continuation capability, **continuationManager** is

## Available APIs

| API | Description|
| ---------------------------------------------------------------------------------------------- | ----------- |
| registerContinuation(callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| registerContinuation(options: ContinuationExtraParams, callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API uses an asynchronous callback to return the result.|
| registerContinuation(options?: ContinuationExtraParams): Promise\<number> | Registers the continuation management service and obtains a token. This API uses a promise to return the result.|
| on(type: "deviceSelected", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device connection events. This API uses an asynchronous callback to return the result.|
| on(type: "deviceUnselected", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device disconnection events. This API uses an asynchronous callback to return the result.|
| off(type: "deviceSelected", token: number): void | Unsubscribes from device connection events.|
| off(type: "deviceUnselected", token: number): void | Unsubscribes from device disconnection events.|
| startContinuationDeviceManager(token: number, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| startContinuationDeviceManager(token: number, options: ContinuationExtraParams, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API uses an asynchronous callback to return the result.|
| startContinuationDeviceManager(token: number, options?: ContinuationExtraParams): Promise\<void> | Starts the device selection module to show the list of available devices. This API uses a promise to return the result.|
| updateContinuationState(token: number, deviceId: string, status: DeviceConnectState, callback: AsyncCallback\<void>): void | Instructs the device selection module to update the device connection state. This API uses an asynchronous callback to return the result.|
| updateContinuationState(token: number, deviceId: string, status: DeviceConnectState): Promise\<void> | Instructs the device selection module to update the device connection state. This API uses a promise to return the result.|
| unregisterContinuation(token: number, callback: AsyncCallback\<void>): void | Deregisters the continuation management service. This API uses an asynchronous callback to return the result.|
| unregisterContinuation(token: number): Promise\<void> | Deregisters the continuation management service. This API uses a promise to return the result.|
## How to Develop

1. Import the **continuationManager** module.

@@ -36,7 +36,7 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
import continuationManager from '@ohos.continuation.continuationManager';
```

2. Apply for the **DISTRIBUTED_DATASYNC** permission.

The permission application operation varies according to the ability model in use. In the FA model, add the required permission in the `config.json` file, as follows:
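The exact snippet is elided from this excerpt; a minimal sketch of such a declaration (FA model, `config.json`) typically looks like this:

```json
{
  "module": {
    "reqPermissions": [
      {
        "name": "ohos.permission.DISTRIBUTED_DATASYNC"
      }
    ]
  }
}
```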
@@ -57,6 +57,7 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
import abilityAccessCtrl from "@ohos.abilityAccessCtrl";
import bundle from '@ohos.bundle';
import featureAbility from '@ohos.ability.featureAbility';
async function requestPermission() {
  let permissions: Array<string> = [
@@ -124,7 +125,8 @@ As the entry of the ability continuation capability, **continuationManager** is
  // If the permission is not granted, call requestPermissionsFromUser to apply for the permission.
  if (needGrantPermission) {
    try {
      // globalThis.context is Ability.context, which must be assigned a value in the MainAbility.ts file in advance.
      await globalThis.context.requestPermissionsFromUser(permissions);
    } catch (err) {
      console.error('app permission request permissions error' + JSON.stringify(err));
    }
```
@@ -140,13 +142,16 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
let token: number = -1; // Used to save the token returned after the registration. The token will be used when listening for device connection/disconnection events, starting the device selection module, and updating the device connection state.

try {
  continuationManager.registerContinuation().then((data) => {
    console.info('registerContinuation finished, ' + JSON.stringify(data));
    token = data; // Obtain a token and assign a value to the token variable.
  }).catch((err) => {
    console.error('registerContinuation failed, cause: ' + JSON.stringify(err));
  });
} catch (err) {
  console.error('registerContinuation failed, cause: ' + JSON.stringify(err));
}
```
4. Listen for the device connection/disconnection state.

@@ -156,9 +161,10 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
let remoteDeviceId: string = ""; // Used to save the information about the remote device selected by the user, which will be used for cross-device continuation or collaboration.

try {
  // The token parameter is the token obtained during the registration.
  continuationManager.on("deviceSelected", token, (continuationResults) => {
    console.info('registerDeviceSelectedCallback len: ' + continuationResults.length);
    if (continuationResults.length <= 0) {
      console.info('no selected device');
      return;
@@ -171,13 +177,15 @@ As the entry of the ability continuation capability, **continuationManager** is
      bundleName: 'ohos.samples.continuationmanager',
      abilityName: 'MainAbility'
    };
    // To initiate multi-device collaboration, you must obtain the ohos.permission.DISTRIBUTED_DATASYNC permission.
    globalThis.abilityContext.startAbility(want).then((data) => {
      console.info('StartRemoteAbility finished, ' + JSON.stringify(data));
    }).catch((err) => {
      console.error('StartRemoteAbility failed, cause: ' + JSON.stringify(err));
    });
  });
} catch (err) {
  console.error('on failed, cause: ' + JSON.stringify(err));
}
```
The preceding multi-device collaboration operation is performed across devices in the stage model. For details about this operation in the FA model, see [Page Ability Development](https://gitee.com/openharmony/docs/blob/master/en/application-dev/ability/fa-pageability.md).

@@ -189,35 +197,43 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.CONNECTED;

// The token parameter is the token obtained during the registration, and the remoteDeviceId parameter is the remoteDeviceId obtained.
try {
  continuationManager.updateContinuationState(token, remoteDeviceId, deviceConnectStatus).then((data) => {
    console.info('updateContinuationState finished, ' + JSON.stringify(data));
  }).catch((err) => {
    console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
  });
} catch (err) {
  console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
}
```
Listen for the device disconnection state so that the user can stop cross-device continuation or collaboration in time. The sample code is as follows:

```ts
try {
  // The token parameter is the token obtained during the registration.
  continuationManager.on("deviceUnselected", token, (continuationResults) => {
    console.info('onDeviceUnselected len: ' + continuationResults.length);
    if (continuationResults.length <= 0) {
      console.info('no unselected device');
      return;
    }

    // Update the device connection state.
    let unselectedDeviceId: string = continuationResults[0].id; // Assign the deviceId of the first deselected remote device to the unselectedDeviceId variable.
    let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.DISCONNECTING; // Device disconnected.

    // The token parameter is the token obtained during the registration, and the unselectedDeviceId parameter is the unselectedDeviceId obtained.
    continuationManager.updateContinuationState(token, unselectedDeviceId, deviceConnectStatus).then((data) => {
      console.info('updateContinuationState finished, ' + JSON.stringify(data));
    }).catch((err) => {
      console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
    });
  });
} catch (err) {
  console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
}
```
5. Start the device selection module to show the list of available devices on the network.

@@ -231,12 +247,16 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
  continuationMode: continuationManager.ContinuationMode.COLLABORATION_SINGLE // Single-choice mode of the device selection module.
};

try {
  // The token parameter is the token obtained during the registration.
  continuationManager.startContinuationDeviceManager(token, continuationExtraParams).then((data) => {
    console.info('startContinuationDeviceManager finished, ' + JSON.stringify(data));
  }).catch((err) => {
    console.error('startContinuationDeviceManager failed, cause: ' + JSON.stringify(err));
  });
} catch (err) {
  console.error('startContinuationDeviceManager failed, cause: ' + JSON.stringify(err));
}
```
6. If you do not need to perform cross-device migration or collaboration operations, you can deregister the continuation management service by passing the token obtained during the registration.

@@ -244,10 +264,14 @@ As the entry of the ability continuation capability, **continuationManager** is
The sample code is as follows:

```ts
try {
  // The token parameter is the token obtained during the registration.
  continuationManager.unregisterContinuation(token).then((data) => {
    console.info('unregisterContinuation finished, ' + JSON.stringify(data));
  }).catch((err) => {
    console.error('unregisterContinuation failed, cause: ' + JSON.stringify(err));
  });
} catch (err) {
  console.error('unregisterContinuation failed, cause: ' + JSON.stringify(err));
}
```
# Page Ability Development

## Overview

### Concepts

The Page ability implements the ArkUI and provides the capability of interacting with developers. When you create an ability in DevEco Studio, DevEco Studio automatically creates template code.

The capabilities related to the Page ability are implemented through the **featureAbility**, and the lifecycle callbacks are implemented through the callbacks in **app.js** or **app.ets**.

### Page Ability Lifecycle

Introduction to the Page ability lifecycle:

The Page ability lifecycle defines all states of a Page ability, such as **INACTIVE**, **ACTIVE**, and **BACKGROUND**.

@@ -27,27 +31,30 @@ Description of ability lifecycle states:
- **BACKGROUND**: The Page ability runs in the background. After being re-activated, the Page ability enters the **ACTIVE** state. After being destroyed, the Page ability enters the **INITIAL** state.

The following figure shows the relationship between lifecycle callbacks and lifecycle states of the Page ability.

![fa-pageAbility-lifecycle](figures/fa-pageAbility-lifecycle.png)

You can override the lifecycle callbacks provided by the Page ability in the **app.js** or **app.ets** file. Currently, the **app.js** file provides only the **onCreate** and **onDestroy** callbacks, and the **app.ets** file provides the full lifecycle callbacks.
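As an illustrative sketch only (FA model **app.js**, not part of this commit), overriding the two callbacks available there looks roughly like this:

```js
// app.js: only onCreate and onDestroy are available here; app.ets offers the full lifecycle set.
export default {
  onCreate() {
    console.info('Application onCreate');
  },
  onDestroy() {
    console.info('Application onDestroy');
  }
}
```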
### Launch Type

The ability supports two launch types: singleton and multi-instance.

You can specify the launch type by setting **launchType** in the **config.json** file. See the sketch after the table below.

**Table 1** Startup modes

| Launch Type | Meaning | Description |
| ----------- | ------- | ----------- |
| standard | Multi-instance | A new instance is started each time an ability starts.|
| singleton | Singleton | The ability has only one instance in the system. If an instance already exists when an ability is started, that instance is reused.|

By default, **singleton** is used.
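For illustration only, the launch type is set per ability in `config.json`; a minimal sketch (with a hypothetical ability name and only the relevant field shown) could look like this:

```json
{
  "module": {
    "abilities": [
      {
        "name": ".MainAbility",
        "launchType": "singleton"
      }
    ]
  }
}
```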
## Development Guidelines

### Available APIs

**Table 2** APIs provided by featureAbility

@@ -73,8 +80,7 @@ By default, **singleton** is used.
```javascript
import featureAbility from '@ohos.ability.featureAbility'
featureAbility.startAbility({
  want: {
    action: "",
    entities: [""],
    type: "",
@@ -83,13 +89,12 @@ By default, **singleton** is used.
    /* In the FA model, abilityName consists of package and ability name. */
    abilityName: "com.example.entry.secondAbility",
    uri: ""
  }
});
```
### Starting a Remote Page Ability

>NOTE
>
>This feature applies only to system applications, since the **getTrustedDeviceListSync** API of the **DeviceManager** class is open only to system applications.
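A rough sketch of the flow, with assumed bundle and ability names and trimmed error handling: query the trusted device list through **DeviceManager**, then pass the chosen device ID in the **want**.

```js
import featureAbility from '@ohos.ability.featureAbility';
import deviceManager from '@ohos.distributedHardware.deviceManager';

// Hypothetical bundle and ability names; remember this path is open to system applications only.
deviceManager.createDeviceManager('com.example.myapplication', (err, dmInstance) => {
  if (err) {
    console.error('createDeviceManager failed: ' + JSON.stringify(err));
    return;
  }
  let deviceList = dmInstance.getTrustedDeviceListSync();
  if (deviceList.length === 0) {
    console.info('no trusted device found');
    return;
  }
  featureAbility.startAbility({
    want: {
      deviceId: deviceList[0].deviceId, // Target the first remote device instead of the local one.
      bundleName: 'com.example.myapplication',
      abilityName: 'com.example.myapplication.MainAbility'
    }
  });
});
```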
@@ -66,7 +66,7 @@ To learn more about the APIs for obtaining device location information, see [Geo
If your application needs to access the device location information when running in the background, it must be configured to be able to run in the background and be granted the **ohos.permission.LOCATION_IN_BACKGROUND** permission. In this way, the system continues to report device location information after your application moves to the background.

You can declare the required permission in your application's configuration file. For details, see [Access Control (Permission) Development](../security/accesstoken-guidelines.md).
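As an illustrative sketch only (stage model, `module.json5`; the surrounding structure is assumed rather than copied from the repository), such a declaration usually resembles:

```json
{
  "module": {
    "requestPermissions": [
      { "name": "ohos.permission.LOCATION" },
      { "name": "ohos.permission.LOCATION_IN_BACKGROUND" }
    ]
  }
}
```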
2. Import the **geolocation** module by which you can implement all APIs related to the basic location capabilities.
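For example, the import is a single line:

```js
import geolocation from '@ohos.geolocation';
```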
@@ -15,6 +15,7 @@

- [Video Playback Development](video-playback.md)
- [Video Recording Development](video-recorder.md)
- Image
  - [Image Development](image.md)
# Audio Capture Development

## Introduction

You can use the APIs provided by **AudioCapturer** to record raw audio files, thereby implementing audio data collection.

**Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioCapturer** instance. This is because some operations can be performed only when the audio capturer is in a given state. If the application performs an operation when the audio capturer is not in the given state, the system may throw an exception or generate other undefined behavior.

## Working Principles

The following figure shows the audio capturer state transitions.

**Figure 1** Audio capturer state transitions

![audio-capturer-state](figures/audio-capturer-state.png)

- **PREPARED**: The audio capturer enters this state by calling **create()**.
- **RUNNING**: The audio capturer enters this state by calling **start()** when it is in the **PREPARED** state or by calling **start()** when it is in the **STOPPED** state.
- **STOPPED**: The audio capturer in the **RUNNING** state can call **stop()** to stop capturing audio data.
- **RELEASED**: The audio capturer in the **PREPARED** or **STOPPED** state can use **release()** to release all occupied hardware and software resources. It will not transition to any other state after it enters the **RELEASED** state.

## Constraints

Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details about permission configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).

## How to Develop

For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8).
1. Use **createAudioCapturer()** to create an **AudioCapturer** instance.

Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording state, and register a callback for notification.

```js
import audio from '@ohos.multimedia.audio';

let audioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
  channels: audio.AudioChannel.CHANNEL_1,
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}

let audioCapturerInfo = {
  source: audio.SourceType.SOURCE_TYPE_MIC,
  capturerFlags: 0 // 0 is the extended flag bit of the audio capturer. The default value is 0.
}

let audioCapturerOptions = {
  streamInfo: audioStreamInfo,
  capturerInfo: audioCapturerInfo
}

let audioCapturer = await audio.createAudioCapturer(audioCapturerOptions);
console.log('AudioRecLog: Create audio capturer success.');
```
2. Use **start()** to start audio recording.

The capturer state will be **STATE_RUNNING** once the audio capturer is started. The application can then begin reading buffers.

```js
import audio from '@ohos.multimedia.audio';

async function startCapturer() {
  let state = audioCapturer.state;
  // The audio capturer can be started only when it is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
  if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
      state != audio.AudioState.STATE_STOPPED) {
    console.info('Capturer is not in a correct state to start');
    return;
  }

  await audioCapturer.start();

  state = audioCapturer.state;
  if (state == audio.AudioState.STATE_RUNNING) {
    console.info('AudioRecLog: Capturer started');
  } else {
    console.error('AudioRecLog: Capturer start failed');
  }
}
```
3. Read the captured audio data and convert it to a byte stream. Call **read()** repeatedly to read the data until the application stops the recording.

The following example shows how to write recorded data into a file.

```js
import fileio from '@ohos.fileio';

let state = audioCapturer.state;
// The read operation can be performed only when the state is STATE_RUNNING.
if (state != audio.AudioState.STATE_RUNNING) {
  console.info('Capturer is not in a correct state to read');
  return;
}

const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the collected audio file.
let fd = fileio.openSync(path, 0o102, 0o777);
if (fd !== null) {
  console.info('AudioRecLog: file fd created');
@@ -115,38 +110,140 @@ If an application needs to perform some operations when the audio renderer state
  console.info('AudioRecLog: file fd opened in append mode');
}

let numBuffersToCapture = 150; // Write data for 150 times.
while (numBuffersToCapture) {
  let buffer = await audioCapturer.read(bufferSize, true);
  if (typeof(buffer) == 'undefined') {
    console.info('AudioRecLog: read buffer failed');
  } else {
    let number = fileio.writeSync(fd, buffer);
    console.info(`AudioRecLog: data written: ${number}`);
  }
  numBuffersToCapture--;
}
```
4. Once the recording is complete, call **stop()** to stop the recording.

```js
async function StopCapturer() {
  let state = audioCapturer.state;
  // The audio capturer can be stopped only when it is in STATE_RUNNING or STATE_PAUSED state.
  if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
    console.info('AudioRecLog: Capturer is not running or paused');
    return;
  }

  await audioCapturer.stop();

  state = audioCapturer.state;
  if (state == audio.AudioState.STATE_STOPPED) {
    console.info('AudioRecLog: Capturer stopped');
  } else {
    console.error('AudioRecLog: Capturer stop failed');
  }
}
```
5. After the task is complete, call **release()** to release related resources.

```js
async function releaseCapturer() {
  let state = audioCapturer.state;
  // The audio capturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
  if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
    console.info('AudioRecLog: Capturer already released');
    return;
  }

  await audioCapturer.release();

  state = audioCapturer.state;
  if (state == audio.AudioState.STATE_RELEASED) {
    console.info('AudioRecLog: Capturer released');
  } else {
    console.info('AudioRecLog: Capturer release failed');
  }
}
```
6. (Optional) Obtain the audio capturer information.
You can use the following code to obtain the audio capturer information:
```js
// Obtain the audio capturer state.
let state = audioCapturer.state;
// Obtain the audio capturer information.
let audioCapturerInfo : audio.AudioCapturerInfo = await audioCapturer.getCapturerInfo();
// Obtain the audio stream information.
let audioStreamInfo : audio.AudioStreamInfo = await audioCapturer.getStreamInfo();
// Obtain the audio stream ID.
let audioStreamId : number = await audioCapturer.getAudioStreamId();
// Obtain the Unix timestamp, in nanoseconds.
let audioTime : number = await audioCapturer.getAudioTime();
// Obtain a proper minimum buffer size.
let bufferSize : number = await audioCapturer.getBufferSize();
```
7. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.
After the mark reached event is subscribed to, when the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('markReach', (reachNumber) => {
console.info('Mark reach event Received');
console.info(`The Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for.
```
8. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event.
After the period reached event is subscribed to, each time the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('periodReach', (reachNumber) => {
console.info('Period reach event Received');
console.info(`In this period, the Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for.
```
9. If your application needs to perform some operations when the audio capturer state is updated, it can subscribe to the state change event. When the audio capturer state is updated, the application receives a callback containing the event type.
```js
audioCapturer.on('stateChange', (state) => {
console.info(`AudioCapturerLog: Changed State to : ${state}`)
switch (state) {
case audio.AudioState.STATE_PREPARED:
console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------');
console.info('Audio State is : Prepared');
break;
case audio.AudioState.STATE_RUNNING:
console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------');
console.info('Audio State is : Running');
break;
case audio.AudioState.STATE_STOPPED:
console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------');
console.info('Audio State is : stopped');
break;
case audio.AudioState.STATE_RELEASED:
console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------');
console.info('Audio State is : released');
break;
default:
console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
console.info('Audio State is : invalid');
break;
}
});
```
# Audio Interruption Mode Development

## Introduction

The audio interruption mode is used to control the playback of multiple audio streams.

Audio applications can set the audio interruption mode to independent or shared under **AudioRenderer**.

In shared mode, multiple audio streams share one session ID. In independent mode, each audio stream has an independent session ID.

**Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions.
## How to Develop

For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).

1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.
Set parameters of the **AudioRenderer** instance in **audioRendererOptions**.
This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.

```js
import audio from '@ohos.multimedia.audio';

var audioStreamInfo = {
...@@ -39,7 +40,7 @@ For details about the APIs, see [AudioRenderer in Audio Management](../reference
    rendererInfo: audioRendererInfo
}
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
```
2. Set the audio interruption mode.
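A minimal sketch of this step, assuming the **audioRenderer** instance from step 1 and the **setInterruptMode()** API with the **audio.InterruptMode** enum (promise form); the chosen mode value is only an illustration.

```js
// Hedged sketch: set the independent interruption mode on the AudioRenderer created in step 1.
// Use audio.InterruptMode.SHARE_MODE instead to share one session ID across streams.
let mode = audio.InterruptMode.INDEPENDENT_MODE;
audioRenderer.setInterruptMode(mode).then(() => {
  console.info('setInterruptMode Succeeded');
}).catch((err) => {
  console.error(`setInterruptMode Failed, error: ${JSON.stringify(err)}`);
});
```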
......
# Audio Playback Development

## Introduction

You can use audio playback APIs to convert audio data into audible analog signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, stop playback, release resources, set the volume, seek to a playback position, and obtain track information.

## Working Principles

The following figures show the audio playback status changes and the interaction with external modules for audio playback.

**Figure 1** Audio playback state transition

![en-us_image_audio_state_machine](figures/en-us_image_audio_state_machine.png)

**NOTE**: If the status is **Idle**, setting the **src** attribute does not change the status. In addition, after the **src** attribute is set successfully, you must call **reset()** before setting it to another value.

**Figure 2** Interaction with external modules for audio playback

![en-us_image_audio_player](figures/en-us_image_audio_player.png)

**NOTE**: When a third-party application calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework and outputs the audio data decoded by the software to the audio HDI of the hardware interface layer to implement audio playback.

## How to Develop

For details about the APIs, see [AudioPlayer in the Media API](../reference/apis/js-apis-media.md#audioplayer).
> **NOTE**
>
> The method for obtaining the path in the FA model is different from that in the stage model. **pathDir** used in the sample code below is an example. You need to obtain the path based on project requirements. For details about how to obtain the path, see [Application Sandbox Path Guidelines](../reference/apis/js-apis-fileio.md#guidelines).
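As a hedged illustration of this note, the sketch below shows one way **pathDir** could be obtained in each model; **globalThis.context** and the helper function name are assumptions for this example, not prescribed by the guide.

```js
// Hedged sketch of obtaining the application files directory used as pathDir.
import featureAbility from '@ohos.ability.featureAbility';

// Stage model: the application/UIAbility context exposes filesDir.
// (Assumes the ability has stored its context in globalThis.context beforehand.)
let stagePathDir = globalThis.context.filesDir; // e.g. /data/storage/el2/base/haps/entry/files

// FA model: obtain the files directory from the ability context asynchronously.
async function getPathDirInFaModel() {
  let context = featureAbility.getContext();
  let pathDir = await context.getFilesDir();
  return pathDir;
}
```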
### Full-Process Scenario

The full audio playback process includes creating an instance, setting the URI, playing audio, seeking to the playback position, setting the volume, pausing playback, obtaining track information, stopping playback, resetting the player, and releasing resources.
...@@ -99,8 +109,9 @@ async function audioPlayerDemo() {
  setCallBack(audioPlayer); // Set the event callbacks.
  // 2. Set the URI of the audio file.
  let fdPath = 'fd://'
  let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
  // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
  let path = pathDir + '/01.mp3'
  await fileIO.open(path).then((fdNumber) => {
    fdPath = fdPath + '' + fdNumber;
    console.info('open fd success fd is' + fdPath);
...@@ -118,6 +129,7 @@ async function audioPlayerDemo() {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'

export class AudioDemo {
  // Set the player callbacks.
  setCallBack(audioPlayer) {
...@@ -139,8 +151,9 @@ export class AudioDemo {
    let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
    this.setCallBack(audioPlayer); // Set the event callbacks.
    let fdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let path = pathDir + '/01.mp3'
    await fileIO.open(path).then((fdNumber) => {
      fdPath = fdPath + '' + fdNumber;
      console.info('open fd success fd is' + fdPath);
...@@ -159,6 +172,7 @@ export class AudioDemo {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'

export class AudioDemo {
  // Set the player callbacks.
  private isNextMusic = false;
...@@ -185,8 +199,9 @@ export class AudioDemo {
  async nextMusic(audioPlayer) {
    this.isNextMusic = true;
    let nextFdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let nextpath = pathDir + '/02.mp3'
    await fileIO.open(nextpath).then((fdNumber) => {
      nextFdPath = nextFdPath + '' + fdNumber;
      console.info('open fd success fd is' + nextFdPath);
...@@ -202,8 +217,9 @@ export class AudioDemo {
    let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
    this.setCallBack(audioPlayer); // Set the event callbacks.
    let fdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let path = pathDir + '/01.mp3'
    await fileIO.open(path).then((fdNumber) => {
      fdPath = fdPath + '' + fdNumber;
      console.info('open fd success fd is' + fdPath);
...@@ -222,6 +238,7 @@ export class AudioDemo {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'

export class AudioDemo {
  // Set the player callbacks.
  setCallBack(audioPlayer) {
...@@ -239,8 +256,9 @@ export class AudioDemo {
    let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
    this.setCallBack(audioPlayer); // Set the event callbacks.
    let fdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let path = pathDir + '/01.mp3'
    await fileIO.open(path).then((fdNumber) => {
      fdPath = fdPath + '' + fdNumber;
      console.info('open fd success fd is' + fdPath);
......
# Audio Recording Development

## Introduction

During audio recording, audio signals are captured, encoded, and saved to files. You can specify parameters such as the sampling rate, number of audio channels, encoding format, encapsulation format, and output file path for audio recording.

## Working Principles

The following figures show the audio recording state transition and the interaction with external modules for audio recording.

**Figure 1** Audio recording state transition

...@@ -10,10 +14,16 @@ During audio recording, audio signals are captured, encoded, and saved to files.

**Figure 2** Interaction with external modules for audio recording

![en-us_image_audio_recorder_zero](figures/en-us_image_audio_recorder_zero.png)
**NOTE**: When a third-party recording application or recorder calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework to obtain the audio data captured through the audio HDI. The framework layer then encodes the audio data through software and saves the encoded and encapsulated audio data to a file to implement audio recording.
## Constraints
Before developing audio recording, configure the **ohos.permission.MICROPHONE** permission for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop

For details about the APIs, see [AudioRecorder in the Media API](../reference/apis/js-apis-media.md#audiorecorder).
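The sketch below outlines the typical create-prepare-start flow as a hedged example only; the configuration values, the 'fd://xx' URI placeholder, and the log text are illustrative and must be adapted to the project.

```js
import media from '@ohos.multimedia.media'

// Hedged sketch of the basic recording flow.
let audioRecorder = media.createAudioRecorder();   // 1. Create an AudioRecorder instance.
let audioRecorderConfig = {
  audioEncodeBitRate : 22050,
  audioSampleRate : 22050,
  numberOfChannels : 2,
  uri : 'fd://xx',                                 // File descriptor of the output file (placeholder).
  audioEncoderMime : media.CodecMimeType.AUDIO_AAC,
  fileFormat : media.ContainerFormatType.CFT_MPEG_4A,
}
audioRecorder.on('prepare', () => {                // 2. Register the 'prepare' event callback.
  console.log('prepare success');
  audioRecorder.start();                           // 3. Start recording once prepare is reported.
});
audioRecorder.on('start', () => {
  console.log('audio recorder start success');
});
audioRecorder.prepare(audioRecorderConfig);        // Trigger preparation with the configuration above.
```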
......
# Audio Stream Management Development

## Introduction

You can use **AudioStreamManager** to manage audio streams.

## Working Principles

The following figure shows the calling relationship of **AudioStreamManager** APIs.

**Figure 1** AudioStreamManager API calling relationship

![en-us_image_audio_stream_manager](figures/en-us_image_audio_stream_manager.png)

**NOTE**: During application development, use **getStreamManager()** to create an **AudioStreamManager** instance. Then, you can call **on('audioRendererChange')** or **on('audioCapturerChange')** to listen for status, client, and audio attribute changes of the audio playback or recording application. To cancel the listening for these changes, call **off('audioRendererChange')** or **off('audioCapturerChange')**. You can call **getCurrentAudioRendererInfoArray()** to obtain information about the audio playback application, such as the unique audio stream ID, UID of the audio playback client, and audio status. Similarly, you can call **getCurrentAudioCapturerInfoArray()** to obtain information about the audio recording application.

## How to Develop

For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9).

1. Create an **AudioStreamManager** instance.

Before using **AudioStreamManager** APIs, you must use **getStreamManager()** to create an **AudioStreamManager** instance.

```js
var audioManager = audio.getAudioManager();
var audioStreamManager = audioManager.getStreamManager();
```
2. (Optional) Call **on('audioRendererChange')** to listen for audio renderer changes.

If an application needs to receive notifications when the audio playback application status, audio playback client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).

```js
audioStreamManager.on('audioRendererChange', (AudioRendererChangeInfoArray) => {
...@@ -61,7 +65,8 @@ If an application needs to receive notifications when the audio playback applica
```

4. (Optional) Call **on('audioCapturerChange')** to listen for audio capturer changes.

If an application needs to receive notifications when the audio recording application status, audio recording client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).

```js
audioStreamManager.on('audioCapturerChange', (AudioCapturerChangeInfoArray) => {
...@@ -94,7 +99,8 @@ If an application needs to receive notifications when the audio recording applic
```

6. (Optional) Call **getCurrentAudioRendererInfoArray()** to obtain information about the current audio renderer.

This API can be used to obtain the unique ID of the audio stream, UID of the audio playback client, audio status, and other information about the audio player. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured, for the device name and device address to be displayed correctly.

```js
await audioStreamManager.getCurrentAudioRendererInfoArray().then( function (AudioRendererChangeInfoArray) {
...@@ -127,7 +133,7 @@ This API can be used to obtain the unique ID of the audio stream, UID of the aud
```

7. (Optional) Call **getCurrentAudioCapturerInfoArray()** to obtain information about the current audio capturer.

This API can be used to obtain the unique ID of the audio stream, UID of the audio recording client, audio status, and other information about the audio capturer. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured, for the device name and device address to be displayed correctly.

```js
await audioStreamManager.getCurrentAudioCapturerInfoArray().then( function (AudioCapturerChangeInfoArray) {
......
# OpenSL ES Audio Recording Development

## Introduction

You can use OpenSL ES to develop the audio recording function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.
......
# OpenSL ES Audio Playback Development

## Introduction

You can use OpenSL ES to develop the audio playback function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.

...@@ -58,7 +58,7 @@ To use OpenSL ES to develop the audio playback function in OpenHarmony, perform

5. Obtain the **bufferQueueItf** instance of the **SL_IID_OH_BUFFERQUEUE** interface.

```c++
SLOHBufferQueueItf bufferQueueItf;
(*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_OH_BUFFERQUEUE, &bufferQueueItf);
```
......
# Video Playback Development

## Introduction

You can use video playback APIs to convert video data into visible signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, stop playback, release resources, set the volume, seek to a playback position, set the playback speed, and obtain track information. This document describes development for the following video playback scenarios: full-process, normal playback, video switching, and loop playback.

## Working Principles

The following figures show the video playback state transition and the interaction with external modules for video playback.

**Figure 1** Video playback state transition

![en-us_image_video_state_machine](figures/en-us_image_video_state_machine.png)

**Figure 2** Interaction with external modules for video playback

![en-us_image_video_player](figures/en-us_image_video_player.png)

**NOTE**: When a third-party application calls a JS interface provided by the JS interface layer, the framework layer invokes the audio component through the media service of the native framework to output the audio data decoded by the software to the audio HDI. The graphics subsystem outputs the image data decoded by the codec HDI at the hardware interface layer to the display HDI. In this way, video playback is implemented.

*Note: Video playback requires hardware capabilities such as display, audio, and codec.*

1. A third-party application obtains a surface ID from the XComponent, as sketched below.
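The ArkTS snippet below is a hedged sketch of this first step only; the component name, field names, and XComponent id are assumptions, and the obtained surface ID would later be handed to the video player (for example, through **setDisplaySurface()**).

```js
// Hedged sketch: obtain a surface ID from an XComponent through its controller.
@Entry
@Component
struct VideoPlayerPage {
  private surfaceID: string = '';  // Illustrative field name.
  private xComponentController: XComponentController = new XComponentController();

  build() {
    Column() {
      XComponent({ id: 'videoXComponent', type: 'surface', controller: this.xComponentController })
        .onLoad(() => {
          // The returned surface ID identifies the window surface used for video rendering.
          this.surfaceID = this.xComponentController.getXComponentSurfaceId();
        })
        .width('100%')
        .height('100%')
    }
  }
}
```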
......
# Video Recording Development

## Introduction

You can use video recording APIs to capture audio and video signals, encode them, and save them to files. You can start, suspend, resume, and stop recording, and release resources. You can also specify parameters such as the encoding format, encapsulation format, and file path for video recording.

## Working Principles

The following figures show the video recording state transition and the interaction with external modules for video recording.

**Figure 1** Video recording state transition

![en-us_image_video_recorder_state_machine](figures/en-us_image_video_recorder_state_machine.png)

**Figure 2** Interaction with external modules for video recording

![en-us_image_video_recorder_zero](figures/en-us_image_video_recorder_zero.png)
**NOTE**: When a third-party camera application or system camera calls a JS interface provided by the JS interface layer, the framework layer uses the media service of the native framework to invoke the audio component. Through the audio HDI, the audio component captures audio data, encodes the audio data through software, and saves the encoded audio data to a file. The graphics subsystem captures image data through the video HDI, encodes the image data through the video codec HDI, and saves the encoded image data to a file. In this way, video recording is implemented.
## Constraints
Before developing video recording, configure the permissions **ohos.permission.MICROPHONE** and **ohos.permission.CAMERA** for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop

For details about the APIs, see [VideoRecorder in the Media API](../reference/apis/js-apis-media.md#videorecorder9).

...@@ -147,3 +157,4 @@ export class VideoRecorderDemo {
  }
}
```
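As a hedged sketch of the core video recording calls, the example below shows one possible sequence; every configuration value, the 'fd://xx' URL placeholder, and the class name are illustrative, and the camera pipeline that feeds the input surface is omitted.

```js
import media from '@ohos.multimedia.media'

// Hedged sketch only; adapt the profile and configuration to your project.
export class VideoRecorderSketch {
  async startRecord() {
    let videoProfile = {
      audioBitrate: 48000,
      audioChannels: 2,
      audioCodec: 'audio/mp4a-latm',
      audioSampleRate: 48000,
      fileFormat: 'mp4',
      videoBitrate: 2000000,
      videoCodec: 'video/mp4v-es',
      videoFrameWidth: 640,
      videoFrameHeight: 480,
      videoFrameRate: 30
    };
    let videoConfig = {
      audioSourceType: 1,      // Microphone.
      videoSourceType: 1,      // Surface-based video source.
      profile: videoProfile,
      url: 'fd://xx',          // File descriptor of the output file (placeholder).
      rotation: 0
    };
    let videoRecorder = await media.createVideoRecorder(); // Create a VideoRecorder instance.
    await videoRecorder.prepare(videoConfig);              // Set the recording parameters.
    let surfaceID = await videoRecorder.getInputSurface(); // Bind this surface to the camera output.
    // ... configure and start the camera output on surfaceID, then:
    await videoRecorder.start();                           // Start recording.
  }
}
```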
...@@ -4,6 +4,4 @@

- [Drawing Development](drawing-guidelines.md)
- [Raw File Development](rawfile-guidelines.md)
- [Native Window Development](native-window-guidelines.md)
- [Using MindSpore Lite for Model Inference](mindspore-lite-guidelines.md)

...@@ -125,6 +125,8 @@

- Security
  - [@ohos.abilityAccessCtrl](js-apis-abilityAccessCtrl.md)
  - [@ohos.privacyManager](js-apis-privacyManager.md)
  - [@ohos.security.cert](js-apis-cert.md)
  - [@ohos.security.cryptoFramework](js-apis-cryptoFramework.md)
  - [@ohos.security.huks](js-apis-huks.md)
  - [@ohos.userIAM.faceAuth](js-apis-useriam-faceauth.md)
  - [@ohos.userIAM.userAuth](js-apis-useriam-userauth.md)

...@@ -146,6 +148,8 @@

  - [@ohos.document](js-apis-document.md)
  - [@ohos.environment](js-apis-environment.md)
  - [@ohos.data.fileAccess](js-apis-fileAccess.md)
  - [@ohos.fileExtensionInfo](js-apis-fileExtensionInfo.md)
  - [@ohos.fileio](js-apis-fileio.md)
  - [@ohos.filemanagement.userfile_manager](js-apis-userfilemanager.md)
  - [@ohos.multimedia.medialibrary](js-apis-medialibrary.md)

...@@ -215,6 +219,7 @@

  - [@ohos.geolocation](js-apis-geolocation.md)
  - [@ohos.multimodalInput.inputConsumer](js-apis-inputconsumer.md)
  - [@ohos.multimodalInput.inputDevice](js-apis-inputdevice.md)
  - [@ohos.multimodalInput.inputDeviceCooperate](js-apis-cooperate.md)
  - [@ohos.multimodalInput.inputEvent](js-apis-inputevent.md)
  - [@ohos.multimodalInput.inputEventClient](js-apis-inputeventclient.md)
  - [@ohos.multimodalInput.inputMonitor](js-apis-inputmonitor.md)
......
...@@ -24,7 +24,9 @@ SystemCapability.BundleManager.DistributedBundleFramework

For details, see [Permission Levels](../../security/accesstoken-overview.md#permission-levels).

## distributedBundle.getRemoteAbilityInfo<sup>deprecated</sup>

> This API is deprecated since API version 9. You are advised to use [getRemoteAbilityInfo](js-apis-distributedBundle.md) instead.

getRemoteAbilityInfo(elementName: ElementName, callback: AsyncCallback&lt;RemoteAbilityInfo&gt;): void;

...@@ -51,7 +53,9 @@ This is a system API and cannot be called by third-party applications.
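A hedged usage sketch for the callback-style API above; the **ElementName** values are placeholders, not taken from the original document.

```js
import distributedBundle from '@ohos.distributedBundle';

// Placeholder ElementName; replace deviceId, bundleName, and abilityName with real values.
distributedBundle.getRemoteAbilityInfo(
  {
    deviceId: '1',
    bundleName: 'com.example.application',
    abilityName: 'EntryAbility'
  },
  (err, remoteAbilityInfo) => {
    if (err) {
      console.error('getRemoteAbilityInfo failed: ' + JSON.stringify(err));
      return;
    }
    console.info('getRemoteAbilityInfo success: ' + JSON.stringify(remoteAbilityInfo));
  });
```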
## distributedBundle.getRemoteAbilityInfo<sup>deprecated</sup>

> This API is deprecated since API version 9. You are advised to use [getRemoteAbilityInfo](js-apis-distributedBundle.md) instead.

getRemoteAbilityInfo(elementName: ElementName): Promise&lt;RemoteAbilityInfo&gt;

...@@ -81,7 +85,9 @@ This is a system API and cannot be called by third-party applications.

| Type | Description |
| ------------------------------------------------------------ | --------------------------------- |
| Promise\<[RemoteAbilityInfo](js-apis-bundle-remoteAbilityInfo.md)> | Promise used to return the remote ability information.|

## distributedBundle.getRemoteAbilityInfos<sup>deprecated</sup>

> This API is deprecated since API version 9. You are advised to use [getRemoteAbilityInfo](js-apis-distributedBundle.md) instead.

getRemoteAbilityInfos(elementNames: Array&lt;ElementName&gt;, callback: AsyncCallback&lt;Array&lt;RemoteAbilityInfo&gt;&gt;): void;

...@@ -104,11 +110,13 @@ This is a system API and cannot be called by third-party applications.

| Name | Type | Mandatory| Description |
| ------------ | ------------------------------------------------------------ | ---- | -------------------------------------------------- |
| elementNames | Array<[ElementName](js-apis-bundle-ElementName.md)> | Yes | **ElementName** array, whose maximum length is 10. |
| callback | AsyncCallback< Array<[RemoteAbilityInfo](js-apis-bundle-remoteAbilityInfo.md)>> | Yes | Callback used to return an array of the remote ability information.|

## distributedBundle.getRemoteAbilityInfos<sup>deprecated</sup>

> This API is deprecated since API version 9. You are advised to use [getRemoteAbilityInfo](js-apis-distributedBundle.md) instead.

getRemoteAbilityInfos(elementNames: Array&lt;ElementName&gt;): Promise&lt;Array&lt;RemoteAbilityInfo&gt;&gt;
......
# ShortcutInfo<sup>(deprecated)</sup>

The **ShortcutInfo** module provides shortcut information defined in the configuration file. For details about the configuration in the FA model, see [config.json](../../quick-start/package-structure.md). For details about the configuration in the stage model, see [Internal Structure of the shortcuts Attribute](../../quick-start/stage-structure.md#internal-structure-of-the-shortcuts-attribute).

> **NOTE**
>
> This module is deprecated since API version 9. You are advised to use [ShortcutInfo](js-apis-bundleManager-shortcutInfo.md) instead.
>
> The initial APIs of this module are supported since API version 7. Newly added APIs will be marked with a superscript to indicate their earliest API version.

## ShortcutWant<sup>(deprecated)</sup>

> This API is deprecated since API version 9. You are advised to use [ShortcutWant](js-apis-bundleManager-shortcutInfo.md) instead.

Describes the information about the target to which the shortcut points.

**System capability**: SystemCapability.BundleManager.BundleFramework

**System API**: This is a system API and cannot be called by third-party applications.

| Name | Type | Readable| Writable| Description |
| ------------------------- | ------ | ---- | ---- | -------------------- |
| targetBundle | string | Yes | No | Target bundle of the shortcut.|
| targetModule<sup>9+</sup> | string | Yes | No | Target module of the shortcut. |
| targetClass | string | Yes | No | Target class required by the shortcut.|

## ShortcutInfo<sup>(deprecated)</sup>

> This API is deprecated since API version 9. You are advised to use [ShortcutInfo](js-apis-bundleManager-shortcutInfo.md) instead.

Describes the shortcut attribute information.

**System capability**: SystemCapability.BundleManager.BundleFramework

| Name | Type | Readable| Writable| Description |
| ----------------------- | ------------------------------------------ | ---- | ---- | ---------------------------- |

...@@ -40,4 +42,3 @@ Describes the shortcut attribute information.

| isStatic | boolean | Yes | No | Whether the shortcut is static. |
| isHomeShortcut | boolean | Yes | No | Whether the shortcut is a home shortcut.|
| isEnabled | boolean | Yes | No | Whether the shortcut is enabled. |
| moduleName<sup>9+</sup> | string | Yes | No | Module name of the shortcut. |
...@@ -6,7 +6,9 @@ The **RemoteAbilityInfo** module provides information about a remote ability.

>
> The initial APIs of this module are supported since API version 8. Newly added APIs will be marked with a superscript to indicate their earliest API version.

## RemoteAbilityInfo<sup>(deprecated)</sup>

> This API is deprecated since API version 9. You are advised to use [RemoteAbilityInfo](js-apis-bundleManager-remoteAbilityInfo.md) instead.

**System capability**: SystemCapability.BundleManager.DistributedBundleFramework
......
# ElementName
The **ElementName** module provides the element name information, which can be obtained through [Context.getElementName](js-apis-Context.md).
> **NOTE**
>
> The initial APIs of this module are supported since API version 9. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## ElementName
**System capability**: SystemCapability.BundleManager.BundleFramework.Core
| Name | Type | Readable| Writable| Description |
| ----------------------- | ---------| ---- | ---- | ------------------------- |
| deviceId | string | Yes | Yes | Device ID. |
| bundleName | string | Yes | Yes | Bundle name of the application. |
| abilityName | string | Yes | Yes | Name of the ability. |
| uri | string | Yes | Yes | Resource ID. |
| shortName | string | Yes | Yes | Short name of the ability. |
| moduleName | string | Yes | Yes | Module name of the HAP file to which the ability belongs. |
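The snippet below is a hedged illustration of obtaining the element name information through **Context.getElementName** in the FA model; the log text is illustrative.

```js
import featureAbility from '@ohos.ability.featureAbility';

// Hedged sketch (FA model): obtain the ElementName of the current ability from its context.
let context = featureAbility.getContext();
context.getElementName().then((elementName) => {
  console.info('getElementName success: ' + JSON.stringify(elementName));
}).catch((error) => {
  console.error('getElementName failed: ' + JSON.stringify(error));
});
```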
# RemoteAbilityInfo
The **RemoteAbilityInfo** module provides information about a remote ability, which can be obtained through [distributedBundle.getRemoteAbilityInfo](js-apis-distributedBundle.md).
> **NOTE**
>
> The initial APIs of this module are supported since API version 9. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## RemoteAbilityInfo
**System capability**: SystemCapability.BundleManager.DistributedBundleFramework
**System API**: This is a system API.
| Name | Type | Readable| Writable| Description |
| ----------- | -------------------------------------------- | ---- | ---- | ----------------------- |
| elementName | [ElementName](js-apis-bundleManager-elementName.md) | Yes | No | Element name information of the remote ability. |
| label | string | Yes | No | Label of the remote ability. |
| icon | string | Yes | No | Icon of the remote ability.|
# ShortcutInfo
The **ShortcutInfo** module provides shortcut information defined in the configuration file. For details about the configuration in the FA model, see [config.json](../../quick-start/package-structure.md). For details about the configuration in the stage model, see [Internal Structure of the shortcuts Attribute](../../quick-start/stage-structure.md#internal-structure-of-the-shortcuts-attribute).
> **NOTE**
>
> The initial APIs of this module are supported since API version 9. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## ShortcutWant
**System capability**: SystemCapability.BundleManager.BundleFramework.Launcher
**System API**: This is a system API.
| Name | Type | Readable| Writable| Description |
| ------------------------- | ------ | ---- | ---- | -------------------- |
| targetBundle | string | Yes | No | Target bundle name of the shortcut.|
| targetModule | string | Yes | No | Target module name of the shortcut. |
| targetAbility | string | Yes | No | Target ability name of the shortcut.|
## ShortcutInfo
**System capability**: SystemCapability.BundleManager.BundleFramework.Launcher
**System API**: This is a system API.
| Name | Type | Readable| Writable| Description |
| ----------------------- | ------------------------------------------ | ---- | ---- | ---------------------------- |
| id | string | Yes | No | ID of the application to which the shortcut belongs. |
| bundleName | string | Yes | No | Name of the bundle that contains the shortcut. |
| moduleName | string | Yes | No | Module name of the shortcut. |
| hostAbility | string | Yes | No | Local ability name of the shortcut. |
| icon | string | Yes | No | Icon of the shortcut. |
| iconId | number | Yes | No | Icon ID of the shortcut. |
| label | string | Yes | No | Label of the shortcut. |
| labelId | number | Yes | No | Label ID of the shortcut. |
| wants | Array\<[ShortcutWant](#shortcutwant)> | Yes | No | Want information required for the shortcut. |
...@@ -796,7 +796,7 @@ call.reject(rejectMessageOptions, (err, data) => {

## call.reject<sup>7+</sup>

reject(callId: number, callback: AsyncCallback\<void>): void

Rejects a call based on the specified call ID. This API uses an asynchronous callback to return the result.

...@@ -811,16 +811,11 @@ This is a system API.

| callId | number | Yes | Call ID. You can obtain the value by subscribing to **callDetailsChange** events.|
| callback | AsyncCallback&lt;void&gt; | Yes | Callback used to return the result. |

**Example**

```js
call.reject(1, (err, data) => {
  console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
});
```
...@@ -1621,8 +1616,8 @@ This is a system API.

**Example**

```js
call.on('callDetailsChange', data => {
  console.log(`callback: data->${JSON.stringify(data)}`);
});
```

...@@ -1646,8 +1641,8 @@ This is a system API.

**Example**

```js
call.on('callEventChange', data => {
  console.log(`callback: data->${JSON.stringify(data)}`);
});
```

...@@ -1671,8 +1666,8 @@ This is a system API.

**Example**

```js
call.on('callDisconnectedCause', data => {
  console.log(`callback: data->${JSON.stringify(data)}`);
});
```

...@@ -1696,8 +1691,8 @@ This is a system API.

**Example**

```js
call.on('mmiCodeResult', data => {
  console.log(`callback: data->${JSON.stringify(data)}`);
});
```
...@@ -1721,8 +1716,8 @@ This is a system API.

**Example**

```js
call.off('callDetailsChange', data => {
  console.log(`callback: data->${JSON.stringify(data)}`);
});
```

...@@ -1746,8 +1741,8 @@ This is a system API.

**Example**

```js
call.off('callEventChange', data => {
  console.log(`callback: data->${JSON.stringify(data)}`);
});
```

...@@ -1771,8 +1766,8 @@ This is a system API.

**Example**

```js
call.off('callDisconnectedCause', data => {
  console.log(`callback: data->${JSON.stringify(data)}`);
});
```

...@@ -1796,8 +1791,8 @@ This is a system API.

**Example**

```js
call.off('mmiCodeResult', data => {
  console.log(`callback: data->${JSON.stringify(data)}`);
});
```
...@@ -2386,7 +2381,7 @@ This is a system API.

let audioDeviceOptions = {
  bluetoothAddress: "IEEE 802-2014"
}
call.setAudioDevice(1, audioDeviceOptions, (err, data) => {
  console.log(`callback: err->${JSON.stringify(err)}, data->${JSON.stringify(data)}`);
});
```
......
# Configuration Policy

The **configPolicy** module provides APIs for obtaining the custom configuration directory and file path based on the predefined custom configuration level.

> **NOTE**
>
> The initial APIs of this module are supported since API version 8. Newly added APIs will be marked with a superscript to indicate their earliest API version.
>
> The APIs provided by this module are system APIs.

## Modules to Import
...@@ -32,7 +32,7 @@ For example, if the **config.xml** file is stored in **/system/etc/config.xml**

**Example**

```js
configPolicy.getOneCfgFile('etc/config.xml', (error, value) => {
  if (error == null) {
    console.log("value is " + value);
  } else {
    console.log("error occurs "+ error);

...@@ -73,7 +73,8 @@ Obtains the path of a configuration file with the specified name and highest pri
getCfgFiles(relPath: string, callback: AsyncCallback&lt;Array&lt;string&gt;&gt;) getCfgFiles(relPath: string, callback: AsyncCallback&lt;Array&lt;string&gt;&gt;)
Obtains all configuration files with the specified name and lists them in ascending order of priority. This API uses an asynchronous callback to return the result. For example, if the **config.xml** file is stored in **/system/etc/config.xml** and **/sys_pod/etc/config.xml**, then **/system/etc/config.xml, /sys_pod/etc/config.xml** is returned. Obtains a list of configuration files with the specified name, sorted in ascending order of priority. This API uses an asynchronous callback to return the result.
For example, if the **config.xml** file is stored in **/system/etc/config.xml** and **/sys_pod/etc/config.xml** (in ascending order of priority), then **/system/etc/config.xml, /sys_pod/etc/config.xml** is returned.
**System capability**: SystemCapability.Customization.ConfigPolicy **System capability**: SystemCapability.Customization.ConfigPolicy
...@@ -86,7 +87,7 @@ Obtains all configuration files with the specified name and lists them in ascend ...@@ -86,7 +87,7 @@ Obtains all configuration files with the specified name and lists them in ascend
**Example** **Example**
```js ```js
configPolicy.getCfgFiles('etc/config.xml', (error, value) => { configPolicy.getCfgFiles('etc/config.xml', (error, value) => {
if (error == undefined) { if (error == null) {
console.log("value is " + value); console.log("value is " + value);
} else { } else {
console.log("error occurs "+ error); console.log("error occurs "+ error);
...@@ -99,7 +100,7 @@ Obtains all configuration files with the specified name and lists them in ascend ...@@ -99,7 +100,7 @@ Obtains all configuration files with the specified name and lists them in ascend
getCfgFiles(relPath: string): Promise&lt;Array&lt;string&gt;&gt; getCfgFiles(relPath: string): Promise&lt;Array&lt;string&gt;&gt;
Obtains all configuration files with the specified name and lists them in ascending order of priority. This API uses a promise to return the result. Obtains a list of configuration files with the specified name, sorted in ascending order of priority. This API uses a promise to return the result.
**System capability**: SystemCapability.Customization.ConfigPolicy
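A minimal usage sketch for this promise-based form, using the same relative path as the callback example above:
```js
configPolicy.getCfgFiles('etc/config.xml').then(value => {
  console.log("value is " + value);
}).catch(error => {
  console.log("error occurs " + error);
});
```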
getCfgDirList(callback: AsyncCallback&lt;Array&lt;string&gt;&gt;)
Obtains the list of configuration level directories. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.Customization.ConfigPolicy
**Example**
```js
configPolicy.getCfgDirList((error, value) => {
  if (error == null) {
    console.log("value is " + value);
  } else {
    console.log("error occurs " + error);
  }
});
```
getCfgDirList(): Promise&lt;Array&lt;string&gt;&gt;
Obtains the list of configuration level directories. This API uses a promise to return the result.
**System capability**: SystemCapability.Customization.ConfigPolicy
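A minimal usage sketch for this promise-based form:
```js
configPolicy.getCfgDirList().then(value => {
  console.log("value is " + value);
}).catch(error => {
  console.log("error occurs " + error);
});
```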
# Screen Hopping
The Screen Hopping module enables two or more networked devices to share the keyboard and mouse for collaborative operations.
> **NOTE**
>
> The initial APIs of this module are supported since API version 9. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## Modules to Import
```js
import inputDeviceCooperate from '@ohos.multimodalInput.inputDeviceCooperate'
```
## inputDeviceCooperate.enable
enable(enable: boolean, callback: AsyncCallback&lt;void&gt;): void
Enables or disables screen hopping. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | ------------------------- | ---- | --------------------------- |
| enable   | boolean                   | Yes  | Whether to enable screen hopping. The value **true** means to enable screen hopping, and **false** means to disable it.|
| callback | AsyncCallback&lt;void&gt; | Yes |Callback used to return the result. |
**Example**
```js
try {
inputDeviceCooperate.enable(true, (error) => {
if (error) {
console.log(`Keyboard mouse crossing enable failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
return;
}
console.log(`Keyboard mouse crossing enable success.`);
});
} catch (error) {
console.log(`Keyboard mouse crossing enable failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.enable
enable(enable: boolean): Promise&lt;void&gt;
Enables or disables screen hopping. This API uses a promise to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| --------- | ------- | ---- | ------------------------------------------------------------------- |
| enable    | boolean | Yes  | Whether to enable screen hopping. The value **true** means to enable screen hopping, and **false** means to disable it. |
**Return value**
| Type                | Description                     |
| ------------------- | ------------------------------- |
| Promise&lt;void&gt; | Promise used to return the result. |
**Example**
```js
try {
inputDeviceCooperate.enable(true).then(() => {
console.log(`Keyboard mouse crossing enable success.`);
}, (error) => {
console.log(`Keyboard mouse crossing enable failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
});
} catch (error) {
console.log(`Keyboard mouse crossing enable failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.start
start(sinkDeviceDescriptor: string, srcInputDeviceId: number, callback: AsyncCallback\<void>): void
Starts screen hopping. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | ---------------------------- | ---- | ---------------------------- |
| sinkDeviceDescriptor | string                       | Yes  | Descriptor of the target device for screen hopping.  |
| srcInputDeviceId     | number                       | Yes  | ID of the input device used for screen hopping.      |
| callback | AsyncCallback\<void> | Yes | Callback used to return the result.|
**Error codes**
For details about the error codes, see [Screen Hopping Error Codes](../errorcodes/errorcodes-multimodalinput.md).
| ID| Error Message|
| -------- | ---------------------------------------- |
| 4400001 | This error code is reported if an invalid device descriptor is passed to the screen hopping API. |
| 4400002 | This error code is reported if the screen hopping status is abnormal when the screen hopping API is called. |
**Example**
```js
// sinkDeviceDescriptor and srcInputDeviceId are placeholder values; obtain the actual ones from your device management logic.
let sinkDeviceDescriptor = "descriptor";
let srcInputDeviceId = 0;
try {
inputDeviceCooperate.start(sinkDeviceDescriptor, srcInputDeviceId, (error) => {
if (error) {
console.log(`Start Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
return;
}
console.log(`Start Keyboard mouse crossing success.`);
});
} catch (error) {
console.log(`Start Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.start
start(sinkDeviceDescriptor: string, srcInputDeviceId: number): Promise\<void>
Starts screen hopping. This API uses a promise to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | ---------------------------- | ---- | ---------------------------- |
| sinkDeviceDescriptor | string                       | Yes  | Descriptor of the target device for screen hopping.  |
| srcInputDeviceId     | number                       | Yes  | ID of the input device used for screen hopping.      |
**Return value**
| Type                   | Description                     |
| ---------------------- | ------------------------------- |
| Promise\<void> | Promise used to return the result. |
**Error codes**
For details about the error codes, see [Screen Hopping Error Codes](../errorcodes/errorcodes-multimodalinput.md).
| ID| Error Message|
| -------- | ---------------------------------------- |
| 4400001 | This error code is reported if an invalid device descriptor is passed to the screen hopping API. |
| 4400002 | This error code is reported if the screen hopping status is abnormal when the screen hopping API is called. |
**Example**
```js
// sinkDeviceDescriptor and srcInputDeviceId are placeholder values; obtain the actual ones from your device management logic.
let sinkDeviceDescriptor = "descriptor";
let srcInputDeviceId = 0;
try {
inputDeviceCooperate.start(sinkDeviceDescriptor, srcInputDeviceId).then(() => {
console.log(`Start Keyboard mouse crossing success.`);
}, (error) => {
console.log(`Start Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
});
} catch (error) {
console.log(`Start Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.stop
stop(callback: AsyncCallback\<void>): void
Stops screen hopping. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | ---------------------------- | ---- | ---------------------------- |
| callback | AsyncCallback\<void> | Yes | Callback used to return the result. |
**Example**
```js
try {
inputDeviceCooperate.stop((error) => {
if (error) {
console.log(`Stop Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
return;
}
console.log(`Stop Keyboard mouse crossing success.`);
});
} catch (error) {
console.log(`Stop Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.stop
stop(): Promise\<void>
Stops screen hopping. This API uses a promise to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Return value**
| Type           | Description                  |
| -------- | ---------------------------- |
| Promise\<void> | Promise used to return the result. |
**Example**
```js
try {
inputDeviceCooperate.stop().then(() => {
console.log(`Stop Keyboard mouse crossing success.`);
}, (error) => {
console.log(`Stop Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
});
} catch (error) {
console.log(`Stop Keyboard mouse crossing failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.getState
getState(deviceDescriptor: string, callback: AsyncCallback<{ state: boolean }>): void
Checks whether screen hopping is enabled. This API uses an asynchronous callback to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | --------- | ---- | ---------------------------- |
| deviceDescriptor | string | Yes | Descriptor of the target device for screen hopping. |
| callback | AsyncCallback<{ state: boolean }> | Yes | Callback used to return the result. |
**Example**
```js
// deviceDescriptor is a placeholder value; obtain the actual descriptor from your device management logic.
let deviceDescriptor = "descriptor";
try {
inputDeviceCooperate.getState(deviceDescriptor, (error, data) => {
if (error) {
console.log(`Get the status failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
return;
}
console.log(`Get the status success, data: ${JSON.stringify(data)}`);
});
} catch (error) {
console.log(`Get the status failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## inputDeviceCooperate.getState
getState(deviceDescriptor: string): Promise<{ state: boolean }>
Checks whether screen hopping is enabled. This API uses a promise to return the result.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | --------- | ---- | ---------------------------- |
| deviceDescriptor | string | Yes | Descriptor of the target device for screen hopping. |
**Return value**
| Type                        | Description                     |
| ------------------- | ------------------------------- |
| Promise<{ state: boolean }>| Promise used to return the result. |
**Example**
```js
// deviceDescriptor is a placeholder value; obtain the actual descriptor from your device management logic.
let deviceDescriptor = "descriptor";
try {
inputDeviceCooperate.getState(deviceDescriptor).then((data) => {
console.log(`Get the status success, data: ${JSON.stringify(data)}`);
}, (error) => {
console.log(`Get the status failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
});
} catch (error) {
console.log(`Get the status failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## on('cooperation')
on(type: 'cooperation', callback: AsyncCallback<{ deviceDescriptor: string, eventMsg: EventMsg }>): void
Enables listening for screen hopping events.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory| Description |
| -------- | ---------------------------- | ---- | ---------------------------- |
| type | string | Yes | Event type. The value is **cooperation**. |
| callback | AsyncCallback<{ deviceDescriptor: string, eventMsg: [EventMsg](#eventmsg) }> | Yes | Callback used to return the result. |
**Example**
```js
try {
inputDeviceCooperate.on('cooperation', (data) => {
console.log(`Keyboard mouse crossing event: ${JSON.stringify(data)}`);
});
} catch (error) {
  console.log(`Register failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## off('cooperation')
off(type: 'cooperation', callback?: AsyncCallback\<void>): void
Disables listening for screen hopping events.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
**Parameters**
| Name | Type | Mandatory | Description |
| -------- | ---------------------------- | ---- | ---------------------------- |
| type | string | Yes | Event type. The value is **cooperation**. |
| callback | AsyncCallback\<void> | No | Callback to be unregistered. If this parameter is not specified, all callbacks registered by the current application will be unregistered.|
**Example**
```js
// Unregister a single callback.
function callback(event) {
  console.log(`Keyboard mouse crossing event: ${JSON.stringify(event)}`);
}
try {
  inputDeviceCooperate.on('cooperation', callback);
  inputDeviceCooperate.off("cooperation", callback);
} catch (error) {
  console.log(`Execute failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
```js
// Unregister all callbacks.
function callback(event) {
  console.log(`Keyboard mouse crossing event: ${JSON.stringify(event)}`);
}
try {
  inputDeviceCooperate.on('cooperation', callback);
  inputDeviceCooperate.off("cooperation");
} catch (error) {
  console.log(`Execute failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```
## EventMsg
Enumerates the screen hopping events.
**System capability**: SystemCapability.MultimodalInput.Input.InputDeviceCooperate
| Name | Value | Description |
| -------- | --------- | ----------------- |
| MSG_COOPERATE_INFO_START | 200 | Screen hopping starts. |
| MSG_COOPERATE_INFO_SUCCESS | 201 | Screen hopping succeeds. |
| MSG_COOPERATE_INFO_FAIL | 202 | Screen hopping fails. |
| MSG_COOPERATE_STATE_ON | 500 | Screen hopping is enabled. |
| MSG_COOPERATE_STATE_OFF | 501 | Screen hopping is disabled. |
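As an illustrative sketch, the **eventMsg** field delivered to the **on('cooperation')** callback can be compared against these values. This assumes the enum is exposed as **inputDeviceCooperate.EventMsg**; the branch logic below is an example, not prescribed behavior.
```js
try {
  inputDeviceCooperate.on('cooperation', (data) => {
    // Assumption: EventMsg is accessible as inputDeviceCooperate.EventMsg.
    if (data.eventMsg == inputDeviceCooperate.EventMsg.MSG_COOPERATE_INFO_SUCCESS) {
      console.log(`Screen hopping succeeded: ${data.deviceDescriptor}`);
    } else if (data.eventMsg == inputDeviceCooperate.EventMsg.MSG_COOPERATE_INFO_FAIL) {
      console.log(`Screen hopping failed: ${data.deviceDescriptor}`);
    }
  });
} catch (error) {
  console.log(`Register failed, error: ${JSON.stringify(error, [`code`, `message`])}`);
}
```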
# User File Extension Info
The **fileExtensionInfo** module defines attributes in **RootInfo** and **FileInfo** of the user file access and management module.
>**NOTE**
>
>- The initial APIs of this module are supported since API version 9. Newly added APIs will be marked with a superscript to indicate their earliest API version.
>- The APIs provided by this module are system APIs.
## Modules to Import
```js
import fileExtensionInfo from '@ohos.fileExtensionInfo';
```
## fileExtensionInfo.DeviceType
Defines the values of **deviceType** used in **RootInfo**.
**System capability**: SystemCapability.FileManagement.UserFileService
| Name| Value| Description|
| ----- | ------ | ------ |
| DEVICE_LOCAL_DISK | 1 | Local disk.|
| DEVICE_SHARED_DISK | 2 | Shared disk.|
| DEVICE_SHARED_TERMINAL | 3 | Distributed network device.|
| DEVICE_NETWORK_NEIGHBORHOODS | 4 | Network neighbor device.|
| DEVICE_EXTERNAL_MTP | 5 | MTP device.|
| DEVICE_EXTERNAL_USB | 6 | USB device.|
| DEVICE_EXTERNAL_CLOUD | 7 | Cloud disk.|
## fileExtensionInfo.DeviceFlag
Defines the values of **deviceFlags** used in **RootInfo**. **deviceFlags** is used with a bitwise AND operation to determine whether a capability is available.
**System capability**: SystemCapability.FileManagement.UserFileService
### Attributes
| Name| Type | Value| Readable| Writable| Description |
| ------ | ------ | ---- | ---- | ---- | -------- |
| SUPPORTS_READ | number | 0b1 | Yes | No | Read support.|
| SUPPORTS_WRITE | number | 0b10 | Yes | No | Write support.|
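For example, a capability check with a bitwise AND might look like the sketch below. The **rootInfo** object is a hypothetical **RootInfo** obtained elsewhere (for example, from the user file access service) and is not provided by this module.
```js
import fileExtensionInfo from '@ohos.fileExtensionInfo';

// rootInfo is a hypothetical RootInfo object obtained from the user file access service.
function checkDeviceCapabilities(rootInfo) {
  // A capability is available if its bit is set in deviceFlags.
  let readable = (rootInfo.deviceFlags & fileExtensionInfo.DeviceFlag.SUPPORTS_READ) !== 0;
  let writable = (rootInfo.deviceFlags & fileExtensionInfo.DeviceFlag.SUPPORTS_WRITE) !== 0;
  console.log(`readable: ${readable}, writable: ${writable}`);
}
```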
## fileExtensionInfo.DocumentFlag
Defines the values of **mode** used in **FileInfo**.
**System capability**: SystemCapability.FileManagement.UserFileService
### Attributes
| Name| Type | Value| Readable| Writable| Description |
| ------ | ------ | ---- | ---- | ---- | -------- |
| REPRESENTS_FILE | number | 0b1 | Yes | No | File.|
| REPRESENTS_DIR | number | 0b10 | Yes | No | Directory.|
| SUPPORTS_READ | number | 0b100 | Yes | No | Read support.|
| SUPPORTS_WRITE | number | 0b1000 | Yes | No | Write support.|
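Similarly, the bits in **mode** can be tested with a bitwise AND. The **fileInfo** object below is a hypothetical **FileInfo** obtained elsewhere and is used for illustration only.
```js
import fileExtensionInfo from '@ohos.fileExtensionInfo';

// fileInfo is a hypothetical FileInfo object obtained from the user file access service.
function isWritableDirectory(fileInfo) {
  let isDir = (fileInfo.mode & fileExtensionInfo.DocumentFlag.REPRESENTS_DIR) !== 0;
  let writable = (fileInfo.mode & fileExtensionInfo.DocumentFlag.SUPPORTS_WRITE) !== 0;
  return isDir && writable;
}
```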