Commit d7b69096 authored by FangJinliang and committed by fangJinliang1

Merge branch 'monthly_20221018' of gitee.com:openharmony/docs into monthly_20221018

Signed-off-by: FangJinliang <fangjinliang1@huawei.com>
Signed-off-by: fangJinliang1 <fangjinliang1@huawei.com>
Change-Id: Ib4c72fc871a50d40c84d5bce63c0c6fa95522bd6
......@@ -18,7 +18,7 @@ This repository stores device and application development documents provided by
- master: the latest version.
- OpenHarmony 3.2 Beta3. [Learn more](en/release-notes/OpenHarmony-v3.2-beta3.md)
- OpenHarmony 3.2 Beta5. [Learn more](en/release-notes/OpenHarmony-v3.2-beta5.md)
- OpenHarmony 3.1 Release. [Learn more](en/release-notes/OpenHarmony-v3.1-release.md)
......@@ -34,7 +34,7 @@ This repository stores device and application development documents provided by
### Historical Stable Versions
OpenHarmony_v1.x_release: OpenHarmony v1.1.5 LTS. [Learn more](en/release-notes/OpenHarmony-v1.1.5-LTS.md)
OpenHarmony_v1.x_release: OpenHarmony 1.1.5 LTS. [Learn more](en/release-notes/OpenHarmony-v1.1.5-LTS.md)
[More versions](en/release-notes/)
......@@ -51,6 +51,6 @@ You can evaluate available documents, make simple modifications, provide feedbac
Excellent contributors will be awarded and the contributions will be publicized in the developer community.
- Mail list: docs@openharmony.io
- Mailing list: docs@openharmony.io
- Zulip group: documentation_sig
\ No newline at end of file
......@@ -70,7 +70,7 @@
- [DevEco Studio (OpenHarmony) User Guide](quick-start/deveco-studio-user-guide-for-openharmony.md)
- [Debugging Tools](tools/Readme-EN.md)
- Hands-On Tutorials
- [Samples](https://gitee.com/openharmony/applications_app_samples/blob/master/README.md)
- [Samples](https://gitee.com/openharmony/applications_app_samples/blob/monthly_20221018/README.md)
- [Codelabs](https://gitee.com/openharmony/codelabs)
- API References
- [SystemCapability](reference/syscap.md)
......
......@@ -48,7 +48,7 @@ DevEco Studio is a high-performance integrated development environment (IDE) rec
### Hands-On Tutorials
To make you better understand how functions work together and jumpstart your application development projects, we provide stripped-down, real-world [samples](https://gitee.com/openharmony/applications_app_samples/blob/master/README.md) and [codelabs](https://gitee.com/openharmony/codelabs).
To make you better understand how functions work together and jumpstart your application development projects, we provide stripped-down, real-world [samples](https://gitee.com/openharmony/applications_app_samples/blob/monthly_20221018/README.md) and [codelabs](https://gitee.com/openharmony/codelabs).
### API References
......
......@@ -24,7 +24,7 @@ First thing first, familiarize yourself with the two cornerstone frameworks in O
All applications should be developed on top of these frameworks.
Then, equip yourself for developing the key features, with the following guidelines:
- [Common Event and Notification](notification/notification-brief.md)
- [Common Event and Notification](notification/notification-overview.md)
- [Window Manager](windowmanager/window-overview.md)
- [WebGL](webgl/webgl-overview.md)
- [Media](media/audio-overview.md)
......@@ -48,7 +48,7 @@ DevEco Studio is a high-performance integrated development environment (IDE) rec
### Hands-On Tutorials
To make you better understand how functions work together and jumpstart your application development projects, we provide stripped-down, real-world [samples](https://gitee.com/openharmony/applications_app_samples/blob/master/README.md) and [codelabs](https://gitee.com/openharmony/codelabs).
To make you better understand how functions work together and jumpstart your application development projects, we provide stripped-down, real-world [samples](https://gitee.com/openharmony/applications_app_samples/blob/monthly_20221018/README.md) and [codelabs](https://gitee.com/openharmony/codelabs).
### API References
......
......@@ -17,8 +17,10 @@
- ExtensionAbility Component
- [ExtensionAbility Component Overview](extensionability-overview.md)
- [ServiceExtensionAbility](serviceextensionability.md)
- [DataShareExtensionAbility](datashareextensionability.md)
- [DataShareExtensionAbility (System Applications Only)](datashareextensionability.md)
- [FormExtensionAbility (Widget)](widget-development-stage.md)
- [AccessibilityExtensionAbility](accessibilityextensionability.md)
- [WindowExtensionAbility](windowextensionability.md)
- [AbilityStage Component Container](abilitystage.md)
- [Context](application-context-stage.md)
- Want
......@@ -31,8 +33,8 @@
- [Component Startup Rules](component-startup-rules.md)
- Inter-Device Application Component Interaction (Continuation)
- [Continuation Overview](inter-device-interaction-hop-overview.md)
- [Cross-Device Migration](hop-cross-device-migration.md)
- [Multi-device Collaboration](hop-multi-device-collaboration.md)
- [Cross-Device Migration (System Applications Only)](hop-cross-device-migration.md)
- [Multi-device Collaboration (System Applications Only)](hop-multi-device-collaboration.md)
- IPC
- [Process Model](process-model-stage.md)
- Common Events
......@@ -49,7 +51,6 @@
- [Mission Management Scenarios](mission-management-overview.md)
- [Mission Management and Launch Type](mission-management-launch-type.md)
- [Page Stack and MissionList](page-mission-stack.md)
- [Application Configuration File](config-file-stage.md)
- FA Model Development
- [FA Model Development Overview](fa-model-development-overview.md)
- FA Mode Application Components
......@@ -62,7 +63,7 @@
- [Creating a PageAbility](create-pageability.md)
- [Starting a Local PageAbility](start-local-pageability.md)
- [Stopping a PageAbility](stop-pageability.md)
- [Starting a Remote PageAbility](start-remote-pageability.md)
- [Starting a Remote PageAbility (System Applications Only)](start-remote-pageability.md)
- [Starting a Specified Page](start-page.md)
- [Window Properties](window-properties.md)
- [Requesting Permissions](request-permissions.md)
......@@ -94,7 +95,6 @@
- [Thread Model](thread-model-fa.md)
- [Inter-Thread Communication](itc-fa-overview.md)
- [Mission Management](mission-management-fa.md)
- [Application Configuration File](config-file-fa.md)
- Development of Component Interaction Between the FA Model and Stage Model
- [Component Interaction Between the FA Model and Stage Model](fa-stage-interaction-overview.md)
- [Starting a UIAbility from the FA Model](start-uiability-from-fa.md)
......
# AccessibilityExtensionAbility Development
The **AccessibilityExtensionAbility** module provides accessibility extension capabilities based on the **ExtensionAbility** framework. You can develop your accessibility applications by applying the **AccessibilityExtensionAbility** template to enhance usability.
> **Environment Requirements**
>
> IDE: DevEco Studio 3.0 Beta3 (3.0.0.900) or later
>
> SDK: API version 9 or later
>
> Model: stage
This document is organized as follows:
- [Creating an Accessibility Extension Service](#creating-an-accessibility-extension-service)
- [Processing an Accessibility Event](#processing-an-accessibility-event)
- [Declaring Capabilities of Accessibility Extension Services](#declaring-capabilities-of-accessibility-extension-services)
- [Enabling a Custom Accessibility Extension Service](#enabling-a-custom-accessibility-extension-service)
## Creating an Accessibility Extension Service
You can create an accessibility extension service by creating a project from scratch or adding the service to an existing project.
### Creating a Project
Perform the following steps in DevEco Studio:
1. From the upper left corner of DevEco Studio, choose **File** > **New** > **Create Project**.
2. By following the project creation wizard, click the **OpenHarmony** tab, select the **Empty Ability** template, and then click **Next**.
3. Set **Project type** to **Application**, **Compile API** (or **Compile SDK**, depending on the version used) to **9**, and **Model** to **Stage**, and then click **Finish**.
### Creating an AccessibilityExtAbility File
To add an accessibility extension service to a project, create the **AccessibilityExtAbility** folder in the **ets** folder of the project, create the **AccessibilityExtAbility.ts** file in the new folder, and add the following code to the new file:
```typescript
import AccessibilityExtensionAbility from '@ohos.application.AccessibilityExtensionAbility';
class AccessibilityExtAbility extends AccessibilityExtensionAbility {
onConnect() {
console.log('AccessibilityExtAbility onConnect');
}
onDisconnect() {
console.log('AccessibilityExtAbility onDisconnect');
}
onAccessibilityEvent(accessibilityEvent) {
console.log('AccessibilityExtAbility onAccessibilityEvent: ' + JSON.stringify(accessibilityEvent));
}
}
export default AccessibilityExtAbility;
```
The APIs defined in the file are as follows.
| API| Description|
| ---- | ---- |
| onConnect(): void | Called when a connection with the extension service is set up.|
| onDisconnect(): void | Called when the connection with the extension service is severed.|
| onAccessibilityEvent(event: AccessibilityEvent): void | Called when an accessibility event occurs.|
## Processing an Accessibility Event
You can process the service logic for accessibility events in the **onAccessibilityEvent()** API. For details about the events, see [AccessibilityEvent](../reference/apis/js-apis-application-accessibilityExtensionAbility.md#accessibilityevent). The following code snippet uses the **pageStateUpdate** event as an example.
```typescript
onAccessibilityEvent(accessibilityEvent) {
console.log('AccessibilityExtAbility onAccessibilityEvent: ' + JSON.stringify(accessibilityEvent));
if (accessibilityEvent.eventType === 'pageStateUpdate') {
console.log('AccessibilityExtAbility onAccessibilityEvent: pageStateUpdate');
// TODO: Develop custom logic.
}
}
```
For an accessibility event, you can use the APIs of the [AccessibilityExtensionContext](../reference/apis/js-apis-inner-application-accessibilityExtensionContext.md) module to configure the information of interest, obtain root element information, and inject gestures.
You can also process physical key events in the accessibility extension service. For details, see [onKeyEvent](../reference/apis/js-apis-application-accessibilityExtensionAbility.md#accessibilityextensionabilityonkeyevent).
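The snippet below is a minimal sketch of such key event handling, assuming it is added to the **AccessibilityExtAbility** class shown above; the return value semantics (returning **true** to intercept the event) follow the API reference linked above.
```typescript
onKeyEvent(keyEvent) {
    console.log('AccessibilityExtAbility onKeyEvent: ' + JSON.stringify(keyEvent));
    // TODO: Develop custom logic, then return true to intercept (consume) the key event.
    return false; // false: the event is passed on to the system for default processing.
}
```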
## Declaring Capabilities of Accessibility Extension Services
After developing the custom logic for an accessibility extension service, you must add the configuration information of the service to the corresponding module-level **module.json5** file in the project directory. In the file, the **srcEntrance** tag indicates the path to the accessibility extension service. Make sure the value of the **type** tag is fixed at **accessibility**. Otherwise, the connection to the service will fail.
```json
"extensionAbilities": [
{
"name": "AccessibilityExtAbility",
"srcEntrance": "./ets/AccessibilityExtAbility/AccessibilityExtAbility.ts",
"label": "$string:MainAbility_label",
"description": "$string:MainAbility_desc",
"type": "accessibility",
"metadata": [
{
"name": "ohos.accessibleability",
"resource": "$profile:accessibility_config"
}
]
}
]
```
**accessibility_config** is the specific configuration of the accessibility extension service. You need to create the **accessibility_config.json** file in **resources/base/profile/** and declare the [capabilities](../reference/apis/js-apis-accessibility.md#capability) of the service in the file.
```json
{
"accessibilityCapabilities": [
"retrieve",
"gesture"
]
}
```
## Enabling a Custom Accessibility Extension Service
To enable or disable an accessibility extension service, run the following command:
- To enable the service: **accessibility enable -a AccessibilityExtAbility -b com.example.demo -c rg**
- To disable the service: **accessibility disable -a AccessibilityExtAbility -b com.example.demo**
In the preceding commands, **AccessibilityExtAbility** indicates the name of the accessibility extension service, **com.example.demo** indicates the bundle name, and **rg** indicates the capabilities to enable (**r** is short for retrieve, and **g** is short for gesture).
If the service is enabled or disabled successfully, the message "enable ability successfully" or "disable ability successfully" is displayed.
......@@ -3,7 +3,8 @@
When developing an application, you may need to configure certain tags to identify the application, such as the bundle name and application icon. This topic describes key tags that need to be configured during application development. Icons and labels are usually configured together. There is the application icon, application label, entry icon, and entry label, which correspond to the **icon** and **label** fields in the [app.json5 file](../quick-start/app-configuration-file.md) and [module.json5 file](../quick-start/module-configuration-file.md). The application icon and label are used in **Settings**. For example, they are displayed in the application list in **Settings**. The entry icon is displayed on the device's home screen after the application is installed. The entry icon maps to a [UIAbility](uiability-overview.md) component. Therefore, an application can have multiple entry icons and labels. When you touch one of them, the corresponding UIAbility page is displayed.
**Figure 1** Icons and labels
**Figure 1** Icons and labels
![application-component-configuration-stage](figures/application-component-configuration-stage.png)
......@@ -14,11 +15,11 @@ When developing an application, you may need to configure certain tags to identi
- **Configuring the application icon and label**
The application icon is specified by the **icon** field in the [app.json5 file](../quick-start/app-configuration-file.md) in the **AppScope** directory of the project. The **icon** field must be set to the index of an image so that the image is displayed as the application icon. The application icon is usually displayed in an application list, for example, the application list in **Settings**.
You must configure an icon and label for an application on the stage model.
The application label is specified by the **label** field in the [app.json5 file](../quick-start/app-configuration-file.md) in the **AppScope** module of the project. The **label** field specifies the application name displayed to users. It must be set to the index of a string resource.
The application icon is specified by the **icon** field in the [app.json5 file](../quick-start/app-configuration-file.md) in the **AppScope** directory of the project. The **icon** field must be set to the index of an image so that the image is displayed as the application icon.
The **icon** and **label** fields in the **app.json5** file are under **app**, as follows:
The application label is specified by the **label** field in the [app.json5 file](../quick-start/app-configuration-file.md) in the **AppScope** module of the project. The **label** field specifies the application name displayed to users. It must be set to the index of a string resource.
```json
{
......@@ -32,7 +33,11 @@ When developing an application, you may need to configure certain tags to identi
- **Configuring the entry icon and label**
The entry icon and label are configured by specifying **icon** and **label** under **abilities** in the [module.json5 file](../quick-start/module-configuration-file.md). For example, if you want to display the icon and label of the UIAbility component on the home screen, add **entity.system.home** to **entities** and **action.system.home** to **actions** under **skills**. If the preceding fields are configured for multiple UIAbility components of an application, multiple icons and labels are displayed on the home screen, corresponding to their respective UIAbility component.
On the stage model, you can configure an entry icon and label for each application component. The entry icon and label are displayed on the home screen.
The entry icon is configured by specifying **icon** under **abilities** in the [module.json5 file](../quick-start/module-configuration-file.md). For example, if you want to display the icon of the UIAbility component on the home screen, add **entity.system.home** to **entities** and **action.system.home** to **actions** under **skills**. If this field is configured for multiple UIAbility components of an application, multiple icons are displayed on the home screen, corresponding to their respective UIAbility component.
The entry label is configured by specifying **label** under **abilities** in the [module.json5 file](../quick-start/module-configuration-file.md). For example, if you want to display the icon of the UIAbility component on the home screen, add **entity.system.home** to **entities** and **action.system.home** to **actions** under **skills**. If this field is configured for multiple UIAbility components of an application, multiple labels are displayed on the home screen, corresponding to their respective UIAbility component.
```json
{
......@@ -58,6 +63,7 @@ When developing an application, you may need to configure certain tags to identi
}
}
```
- **Configuring application version declaration**
To declare the application version, configure the **versionCode** and **versionName** fields in the [app.json5 file](../quick-start/app-configuration-file.md) in the **AppScope** directory of the project. **versionCode** specifies the version number of the application. The value is a 32-bit non-negative integer. It is used only to determine whether a version is later than another version. A larger value indicates a later version. **versionName** provides the text description of the version number.
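A minimal sketch of such a declaration in the **app.json5** file is shown below; the values are illustrative only.

```json
{
  "app": {
    "versionCode": 1000000,
    "versionName": "1.0.0"
  }
}
```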
......@@ -69,4 +75,3 @@ When developing an application, you may need to configure certain tags to identi
- **Configuring the module permission**
The **requestPermissions** field in the [module.json5 file](../quick-start/module-configuration-file.md) is used to configure the permission information required by the module to access the protected part of the system or other applications. This field declares the name of the permission to request, the reason for requesting the permission, and the scenario where the permission is used.
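A minimal sketch of such a declaration, with an illustrative permission name and reason resource, is shown below.

```json
{
  "module": {
    "requestPermissions": [
      {
        "name": "ohos.permission.INTERNET",
        "reason": "$string:network_reason",
        "usedScene": {
          "abilities": ["EntryAbility"],
          "when": "inuse"
        }
      }
    ]
  }
}
```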
......@@ -14,7 +14,7 @@ The stage model is designed based on the following considerations, which make it
1. **Designed for complex applications**
- In the stage model, multiple application components share an ArkTS engine (VM running the programming language ArkTS) instance, making it easy for application components to share objects and status while requiring less memory.
- The object-oriented development mode makes the code of complex applications easy to read, maintain, and scale.
- The object-oriented development mode makes the code of complex applications easy to read, maintain, and scale.
2. **Native support for [cross-device migration](hop-cross-device-migration.md) and [multi-device collaboration](hop-multi-device-collaboration.md) at the application component level**
......@@ -48,13 +48,12 @@ In the stage model, multiple application components share the same ArkTS engine
The table below describes their differences in detail.
**Table 1** Differences between the FA model and stage model
**Table 1** Differences between the FA model and stage model
| Item| FA model| Stage model|
| -------- | -------- | -------- |
| **Application component**| 1. Component classification<br>- PageAbility: has the UI and supports user interaction. For details, see [PageAbility Component Overview](pageability-overview.md).<br>- ServiceAbility: provides background services and has no UI. For details, see [ServiceAbility Component Overview](serviceability-overview.md).<br>- DataAbility: provides the data sharing capability and has no UI. For details, see [DataAbility Component Overview](dataability-overview.md).<br>2. Development mode<br>Application components are specified by exporting anonymous objects and fixed entry files. You cannot perform derivation. It is inconvenient for capability expansion.| 1. Component classification<br>- UIAbility: has the UI and supports user interaction. For details, see [UIAbility Component Overview](uiability-overview.md).<br>- ExtensionAbility: provides extension capabilities (such as widget and input methods) for specific scenarios. For details, see [ExtensionAbility Component Overview](extensionability-overview.md).<br>2. Development mode<br>The object-oriented mode is used to provide open application components as classes. You can derive application components for capability expansion.|
| **Process model**| There are two types of processes:<br>1. Main process<br>2. Rendering process<br>For details, see [Process Model (FA Model)](process-model-fa.md). | There are three types of processes:<br>1. Main process<br>2. ExtensionAbility process<br>3. Rendering process<br>For details, see [Process Model (Stage Model)](process-model-stage.md). |
| **Thread model**| 1. ArkTS engine instance creation<br>A process can run multiple application component instances, and each application component instance runs in an independent ArkTS engine instance.<br>2. Thread model<br>Each ArkTS engine instance is created on an independent thread (non-main thread). The main thread does not have an ArkTS engine instance.<br>3. Intra-process object sharing: not supported.<br>For details, see [Thread Model (FA Model)](thread-model-fa.md). | 1. ArkTS engine instance creation<br>A process can run multiple application component instances, and all application component instances share one ArkTS engine instance.<br>2. Thread model<br>The ArkTS engine instance is created on the main thread.<br>3. Intra-process object sharing: supported.<br>For details, see [Thread Model (Stage Model)](thread-model-stage.md). |
| **Application component**| 1. Component classification<br>![fa-model-component](figures/fa-model-component.png)<br/>- PageAbility: has the UI and supports user interaction For details, see [PageAbility Component Overview](pageability-overview.md).<br>- ServiceAbility: provides background services and has no UI. For details, see [ServiceAbility Component Overview](serviceability-overview.md).<br>- DataAbility: provides the data sharing capability and has no UI. For details, see [DataAbility Component Overview](dataability-overview.md).<br>2. Development mode<br>Application components are specified by exporting anonymous objects and fixed entry files. You cannot perform derivation. It is inconvenient for capability expansion. | 1. Component classification<br>![stage-model-component](figures/stage-model-component.png)<br/> - UIAbility: has the UI and supports user interaction. For details, see [UIAbility Component Overview](uiability-overview.md).<br>- ExtensionAbility: provides extension capabilities (such as widget and input methods) for specific scenarios. For details, see [ExtensionAbility Component Overview](extensionability-overview.md).<br>2. Development mode<br>The object-oriented mode is used to provide open application components as classes. You can derive application components for capability expansion. |
| **Process model**| There are two types of processes:<br>1. Main process<br>2. Rendering process<br>For details, see [Process Model (FA Model)](process-model-fa.md).| There are three types of processes:<br>1. Main process<br>2. ExtensionAbility process<br>3. Rendering process<br>For details, see [Process Model (Stage Model)](process-model-stage.md).|
| **Thread model**| 1. ArkTS engine instance creation<br>A process can run multiple application component instances, and each application component instance runs in an independent ArkTS engine instance.<br>2. Thread model<br>Each ArkTS engine instance is created on an independent thread (non-main thread). The main thread does not have an ArkTS engine instance.<br>3. Intra-process object sharing: not supported.<br>For details, see [Thread Model (FA Model)](thread-model-fa.md).| 1. ArkTS engine instance creation<br>A process can run multiple application component instances, and all application component instances share one ArkTS engine instance.<br>2. Thread model<br>The ArkTS engine instance is created on the main thread.<br>3. Intra-process object sharing: supported.<br>For details, see [Thread Model (Stage Model)](thread-model-stage.md).|
| **Mission management model**| - A mission is created for each PageAbility component instance.<br>- Missions are stored persistently until the number of missions exceeds the maximum (customized based on the product configuration) or users delete missions.<br>- PageAbility components do not form a stack structure.<br>For details, see [Mission Management Scenarios](mission-management-overview.md).| - A mission is created for each UIAbility component instance.<br>- Missions are stored persistently until the number of missions exceeds the maximum (customized based on the product configuration) or users delete missions.<br>- UIAbility components do not form a stack structure.<br>For details, see [Mission Management Scenarios](mission-management-overview.md).|
| **Application configuration file**| The **config.json** file is used to describe the application, HAP, and application component information.<br>For details, see [Application Configuration File Overview (FA Model)](../quick-start/application-configuration-file-overview-fa.md).| The **app.json5** file is used to describe the application information, and the **module.json5** file is used to describe the HAP and application component information.<br>For details, see [Application Configuration File Overview (Stage Model)](../quick-start/application-configuration-file-overview-stage.md).|
# DataShareExtensionAbility
# DataShareExtensionAbility (System Applications Only)
DataShareExtensionAbility is available only for system application. It provides the data sharing capability. System applications can implement a DataShareExtensionAbility or access an existing DataShareExtensionAbility in the system. Third-party applications can only access an existing DataShareExtensionAbility. For details, see [DataShare Development](../database/database-datashare-guidelines.md).
DataShareExtensionAbility provides the data sharing capability. System applications can implement a DataShareExtensionAbility or access an existing DataShareExtensionAbility in the system. Third-party applications can only access an existing DataShareExtensionAbility. For details, see [DataShare Development](../database/database-datashare-guidelines.md).
......@@ -9,7 +9,7 @@ An [ExtensionAbilityType](../reference/apis/js-apis-bundleManager.md#extensionab
- [FormExtensionAbility](../reference/apis/js-apis-app-form-formExtensionAbility.md): ExtensionAbility component of the form type, which provides APIs related to widgets.
- [WorkSchedulerExtensionAbility](../reference/apis/js-apis-resourceschedule-workScheduler.md): ExtensionAbility component of the work_scheduler type, which provides APIs for registering, canceling, and querying Work Scheduler tasks.
- [WorkSchedulerExtensionAbility](../reference/apis/js-apis-WorkSchedulerExtensionAbility.md): ExtensionAbility component of the work_scheduler type, which provides APIs for registering, canceling, and querying Work Scheduler tasks.
- [InputMethodExtensionAbility](../reference/apis/js-apis-inputmethod.md): ExtensionAbility component of the input_method type, which provides an input method framework that can be used to hide the keyboard, obtain the list of installed input methods, display the dialog box for input method selection, and more.
......@@ -21,7 +21,7 @@ An [ExtensionAbilityType](../reference/apis/js-apis-bundleManager.md#extensionab
- [StaticSubscriberExtensionAbility](../reference/apis/js-apis-application-staticSubscriberExtensionAbility.md): ExtensionAbility component of the static_subscriber type, which provides APIs for static broadcast.
- [WindowExtensionAbility](../reference/apis/js-apis-application-windowExtensionAbility.md): ExtensionAbility component of the window type, which allows system applications to display UIs of other applications.
- [WindowExtensionAbility](../reference/apis/js-apis-application-windowExtensionAbility.md): ExtensionAbility component of the window type, which allows a system application to be embedded in and displayed over another application.
- [EnterpriseAdminExtensionAbility](../reference/apis/js-apis-EnterpriseAdminExtensionAbility.md): ExtensionAbility component of the enterprise_admin type, which provides APIs for processing enterprise management events, such as application installation events on devices and events indicating too many incorrect screen-lock password attempts.
......@@ -33,6 +33,7 @@ All types of ExtensionAbility components are started by the corresponding system
The following uses [InputMethodExtensionAbility](../reference/apis/js-apis-inputmethod.md) as an example. As shown in the figure below, when an application calls the InputMethodExtensionAbility component, the input method management service is called first. The input method management service starts the InputMethodExtensionAbility component, returns the component to the application, and starts to manage its lifecycle.
**Figure 1** Using the InputMethodExtensionAbility component
![ExtensionAbility-start](figures/ExtensionAbility-start.png)
......@@ -48,11 +49,11 @@ You do not need to care when to add or delete a widget. The lifecycle of the For
> **NOTE**
>
> For an application, all ExtensionAbility components of the same type run in an independent process, whereas UIAbility, ServiceExtensionAbility, and DataShareExtensionAbility run in another independent process. For details, see [Process Model (Stage Model)](process-model-stage.md).
>
>
> For example, an application has one UIAbility component, one ServiceExtensionAbility, one DataShareExtensionAbility, two FormExtensionAbility, and one ImeExtensionAbility. When the application is running, there are three processes:
>
>
> - UIAbility, ServiceExtensionAbility, and DataShareExtensionAbility run in an independent process.
>
>
> - The two FormExtensionAbility components run in an independent process.
>
>
> - The two ImeExtensionAbility components run in an independent process.
# Cross-Device Migration
# Cross-Device Migration (System Applications Only)
## When to Use
Cross-device migration is available only for system applications. The main task is to migrate the current task (including the page control status) of an application to the target device so that the task can continue on it. Cross-device migration supports the following functionalities:
The main task of cross-device migration is to migrate the current task (including the page control status) of an application to the target device so that the task can continue on it. Cross-device migration supports the following functionalities:
- Storage and restoration of custom data
......
# Multi-device Collaboration
# Multi-device Collaboration (System Applications Only)
## When to Use
Multi-device coordination is available only for system applications. It involves the following scenarios:
Multi-device collaboration involves the following scenarios:
- [Starting UIAbility and ServiceExtensionAbility Across Devices (No Data Returned)](#starting-uiability-and-serviceextensionability-across-devices-no-data-returned)
......
# ServiceExtensionAbility
[ServiceExtensionAbility](../reference/apis/js-apis-app-ability-serviceExtensionAbility.md) is an ExtensionAbility component of the service type that provides extension capabilities related to background services.
......@@ -17,9 +18,9 @@ Each type of ExtensionAbility has its own context. ServiceExtensionAbility has [
This topic describes how to use ServiceExtensionAbility in the following scenarios:
- [Implementing a Background Service](#implementing-a-background-service)
- [Implementing a Background Service (System Applications Only)](#implementing-a-background-service-system-applications-only)
- [Starting a Background Service](#starting-a-background-service)
- [Starting a Background Service (System Applications Only)](#starting-a-background-service-system-applications-only)
- [Connecting to a Background Service](#connecting-to-a-background-service)
......@@ -32,36 +33,32 @@ This topic describes how to use ServiceExtensionAbility in the following scenari
> - Third-party applications can connect to ServiceExtensionAbility provided by the system only when they gain focus in the foreground.
## Implementing a Background Service
## Implementing a Background Service (System Applications Only)
This feature applies only to system applications. [ServiceExtensionAbility](../reference/apis/js-apis-app-ability-serviceExtensionAbility.md) provides the callbacks **onCreate()**, **onRequest()**, **onConnect()**, **onDisconnect()**, and **onDestroy()**. Override them as required. The following figure shows the lifecycle of ServiceExtensionAbility.
[ServiceExtensionAbility](../reference/apis/js-apis-app-ability-serviceExtensionAbility.md) provides the callbacks **onCreate()**, **onRequest()**, **onConnect()**, **onDisconnect()**, and **onDestroy()**. Override them as required. The following figure shows the lifecycle of ServiceExtensionAbility.
**Figure 1** ServiceExtensionAbility lifecycle
![ServiceExtensionAbility-lifecycle](figures/ServiceExtensionAbility-lifecycle.png)
- **onCreate**
This callback is triggered when a service is created for the first time. You can perform initialization operations, for example, registering a common event listener.
This callback is triggered when a service is created for the first time. You can perform initialization operations, for example, registering a common event listener.
> **NOTE**
>
>
> If a service has been created, starting it again does not trigger the **onCreate()** callback.
- **onRequest**
This callback is triggered when another component calls the **startServiceExtensionAbility()** method to start the service. After being started, the service runs in the background.
This callback is triggered when another component calls the **startServiceExtensionAbility()** method to start the service. After being started, the service runs in the background.
- **onConnect**
This callback is triggered when another component calls the **connectServiceExtensionAbility()** method to connect to the service. In this method, a remote proxy object (IRemoteObject) is returned, through which the client communicates with the server by means of RPC.
This callback is triggered when another component calls the **connectServiceExtensionAbility()** method to connect to the service. In this method, a remote proxy object (IRemoteObject) is returned, through which the client communicates with the server by means of RPC.
- **onDisconnect**
This callback is triggered when a component calls the **disconnectServiceExtensionAbility()** method to disconnect from the service.
This callback is triggered when a component calls the **disconnectServiceExtensionAbility()** method to disconnect from the service.
- **onDestroy**
......@@ -167,9 +164,9 @@ To implement a background service, manually create a ServiceExtensionAbility com
```
## Starting a Background Service
## Starting a Background Service (System Applications Only)
This feature applies only to system applications. A system application uses the [startServiceExtensionAbility()](../reference/apis/js-apis-inner-application-uiAbilityContext.md#abilitycontextstartserviceextensionability) method to start a background service. The [onRequest()](../reference/apis/js-apis-app-ability-serviceExtensionAbility.md#serviceextensionabilityonrequest) callback is invoked, and the **Want** object passed by the caller is received through the callback. After the background service is started, its lifecycle is independent of that of the client. In other words, even if the client is destroyed, the background service can still run. Therefore, the background service must be stopped by calling [terminateSelf()](../reference/apis/js-apis-inner-application-serviceExtensionContext.md#serviceextensioncontextterminateself) when its work is complete. Alternatively, another component can call [stopServiceExtensionAbility()](../reference/apis/js-apis-inner-application-uiAbilityContext.md#abilitycontextstopserviceextensionability) to stop the background service.
A system application uses the [startServiceExtensionAbility()](../reference/apis/js-apis-inner-application-uiAbilityContext.md#abilitycontextstartserviceextensionability) method to start a background service. The [onRequest()](../reference/apis/js-apis-app-ability-serviceExtensionAbility.md#serviceextensionabilityonrequest) callback is invoked, and the **Want** object passed by the caller is received through the callback. After the background service is started, its lifecycle is independent of that of the client. In other words, even if the client is destroyed, the background service can still run. Therefore, the background service must be stopped by calling [terminateSelf()](../reference/apis/js-apis-inner-application-serviceExtensionContext.md#serviceextensioncontextterminateself) when its work is complete. Alternatively, another component can call [stopServiceExtensionAbility()](../reference/apis/js-apis-inner-application-uiAbilityContext.md#abilitycontextstopserviceextensionability) to stop the background service.
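The following is a minimal sketch of starting and stopping such a service, assuming it is called from within a UIAbility of a system application; the bundle and ability names are placeholders.

```ts
let want = {
    deviceId: '',
    bundleName: 'com.example.myapplication',
    abilityName: 'ServiceExtAbility'
};

// Start the background service.
this.context.startServiceExtensionAbility(want).then(() => {
    console.info('startServiceExtensionAbility success');
}).catch((error) => {
    console.error('startServiceExtensionAbility failed: ' + JSON.stringify(error));
});

// Stop the background service when its work is complete.
this.context.stopServiceExtensionAbility(want).then(() => {
    console.info('stopServiceExtensionAbility success');
}).catch((error) => {
    console.error('stopServiceExtensionAbility failed: ' + JSON.stringify(error));
});
```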
> **NOTE**
>
......
# Starting a Remote PageAbility
# Starting a Remote PageAbility (System Applications Only)
This feature applies only to system applications. The **startAbility()** method in the **featureAbility** class is used to start a remote PageAbility.
The **startAbility()** method in the **featureAbility** class is used to start a remote PageAbility.
In addition to **'\@ohos.ability.featureAbility'**, you must import **'\@ohos.distributedHardware.deviceManager'**, which provides account-independent distributed device networking capabilities. Then you can use **getTrustedDeviceListSync** of the **DeviceManager** module to obtain the remote device ID and pass the remote device ID in the **want** parameter for starting the remote PageAbility.
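A minimal sketch of this flow is shown below; the bundle and ability names are placeholders, and the first trusted device found is used as the target.

```ts
import featureAbility from '@ohos.ability.featureAbility';
import deviceManager from '@ohos.distributedHardware.deviceManager';

deviceManager.createDeviceManager('com.example.myapplication', (err, dmInstance) => {
    if (err) {
        console.error('createDeviceManager failed: ' + JSON.stringify(err));
        return;
    }
    // Obtain the trusted devices on the distributed network.
    let deviceList = dmInstance.getTrustedDeviceListSync();
    if (deviceList.length === 0) {
        console.info('no trusted device found');
        return;
    }
    // Pass the remote device ID in the want parameter to start the remote PageAbility.
    featureAbility.startAbility({
        want: {
            deviceId: deviceList[0].deviceId,
            bundleName: 'com.example.myapplication',
            abilityName: 'com.example.myapplication.RemotePageAbility'
        }
    }).then((data) => {
        console.info('startAbility success: ' + JSON.stringify(data));
    });
});
```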
......
......@@ -17,7 +17,7 @@ This topic describes the UIAbility interaction modes in the following scenarios.
- [Starting a Specified Page of UIAbility](#starting-a-specified-page-of-uiability)
- [Using Ability Call to Implement UIAbility Interaction](#using-ability-call-to-implement-uiability-interaction)
- [Using Ability Call to Implement UIAbility Interaction (System Applications Only)](#using-ability-call-to-implement-uiability-interaction-system-applications-only)
## Starting UIAbility in the Same Application
......@@ -416,9 +416,9 @@ In summary, when a UIAbility instance of application A has been created and the
> When the [launch type of the callee UIAbility](uiability-launch-type.md) is set to **standard**, a new instance is created each time the callee UIAbility is started. In this case, the [onNewWant()](../reference/apis/js-apis-app-ability-uiAbility.md#abilityonnewwant) callback will not be invoked.
## Using Ability Call to Implement UIAbility Interaction
## Using Ability Call to Implement UIAbility Interaction (System Applications Only)
This feature applies only to system applications. Ability call is an extension of the UIAbility capability. It enables the UIAbility to be invoked by and communicate with external systems. The UIAbility invoked can be either started in the foreground or created and run in the background. You can use the ability call to implement data sharing between two UIAbility instances (caller ability and callee ability) through IPC.
Ability call is an extension of the UIAbility capability. It enables the UIAbility to be invoked by and communicate with external systems. The UIAbility invoked can be either started in the foreground or created and run in the background. You can use the ability call to implement data sharing between two UIAbility instances (caller ability and callee ability) through IPC.
The core API used for the ability call is **startAbilityByCall**, which differs from **startAbility** in the following ways:
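The sketch below shows the caller side only, called from a UIAbility of a system application with placeholder bundle and ability names; the callee side must also register its methods through the Callee object (not shown here).

```ts
let caller = undefined;

// An empty deviceId indicates a local ability call.
this.context.startAbilityByCall({
    bundleName: 'com.example.myapplication',
    abilityName: 'CalleeAbility',
    deviceId: ''
}).then((obj) => {
    caller = obj;
    console.info('startAbilityByCall success');
    // Use caller.call() or caller.callWithResult() to exchange data with the callee, then release the caller.
    caller.release();
}).catch((error) => {
    console.error('startAbilityByCall failed: ' + JSON.stringify(error));
});
```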
......
......@@ -3,7 +3,7 @@
## Overview
UIAbility has the UI and is mainly used for user interaction.
UIAbility is a type of application component that provides the UI for user interaction.
UIAbility is the basic unit scheduled by the system and provides a window for applications to draw UIs. A UIAbility component can implement a functional module through multiple pages. Each UIAbility component instance corresponds to a mission in **Recents**.
......@@ -32,8 +32,3 @@ To enable an application to properly use a UIAbility component, declare the UIAb
}
}
```
> **NOTE**
>
> For the ability composition, see [Adding an Ability to a Module](https://developer.harmonyos.com/en/docs/documentation/doc-guides-V3/ohos-adding-ability-0000001218280664-V3).
# WindowExtensionAbility
[WindowExtensionAbility](../reference/apis/js-apis-application-windowExtensionAbility.md) is a type of ExtensionAbility component that allows a system application to be embedded in and displayed over another application.
The WindowExtensionAbility component must be used together with the [AbilityComponent](../reference/arkui-ts/ts-container-ability-component.md) to process services of the started application. WindowExtensionAbility runs in connection mode. A system application must use the AbilityComponent to start the WindowExtensionAbility component.
Each ExtensionAbility has its own context. For WindowExtensionAbility,
the context is [WindowExtensionContext](../reference/apis/js-apis-inner-application-windowExtensionContext.md).
> **NOTE**
>
> **WindowExtensionAbility** is a system API. To embed a third-party application in another application and display it over the application, switch to the full SDK by following the instructions provided in [Guide to Switching to Full SDK](../../application-dev/quick-start/full-sdk-switch-guide.md).
>
## Setting an Embedded Ability (System Applications Only)
The **WindowExtensionAbility** class provides **onConnect()**, **onDisconnect()**, and **onWindowReady()** lifecycle callbacks, which can be overridden.
- The **onWindowReady()** callback is invoked when a window is created for the ability.
- The **onConnect()** callback is invoked when the AbilityComponent corresponding to the window connects to the ability.
- The **onDisconnect()** callback is invoked when the AbilityComponent disconnects from the ability.
**How to Develop**
To implement an embedded application, manually create a WindowExtensionAbility in DevEco Studio as follows:
1. In the **ets** directory of the **Module** project, right-click and choose **New > Directory** to create a directory named **WindowExtAbility**.
2. Right-click the **WindowExtAbility** directory, and choose **New > TypeScript File** to create a file named **WindowExtAbility.ts**.
3. Open the **WindowExtAbility.ts** file and import the dependency package of **WindowExtensionAbility**. Customize a class that inherits from **WindowExtensionAbility** and implement the **onWindowReady()**, **onConnect()**, and **onDisconnect()** lifecycle callbacks.
```ts
import Extension from '@ohos.application.WindowExtensionAbility'
export default class WindowExtAbility extends Extension {
onWindowReady(window) {
window.loadContent('WindowExtAbility/pages/index1').then(() => {
window.getProperties().then((pro) => {
console.log("WindowExtension " + JSON.stringify(pro));
})
window.show();
})
}
onConnect(want) {
console.info('JSWindowExtension onConnect ' + want.abilityName);
}
onDisconnect(want) {
console.info('JSWindowExtension onDisconnect ' + want.abilityName);
}
}
```
4. Register the WindowExtensionAbility in the [module.json5 file](../quick-start/module-configuration-file.md) corresponding to the **Module** project. Set **type** to **"window"** and **srcEntrance** to the code path of the ExtensionAbility component.
```json
{
"module": {
"extensionAbilities": [
{
"name": "WindowExtAbility",
"srcEntrance": "./ets/WindowExtAbility/WindowExtAbility.ts",
"icon": "$media:icon",
"description": "WindowExtension",
"type": "window",
"visible": true,
}
],
}
}
```
## Starting an Embedded Ability (System Applications Only)
System applications can load the created WindowExtensionAbility through the AbilityComponent.
**How to Develop**
1. To connect to an embedded application, add the AbilityComponent to the corresponding pages in the DevEco Studio project.
2. Set **bundleName** and **abilityName** in the AbilityComponent.
3. Set the width and height. The sample code is as follows:
```ts
@Entry
@Component
struct Index {
@State message: string = 'Hello World'
build() {
Row() {
Column() {
AbilityComponent({ abilityName: "WindowExtAbility", bundleName: "com.example.WindowExtAbility"})
.width(500)
.height(500)
}
.width('100%')
}
.height('100%')
.backgroundColor(0x64BB5c)
}
}
```
......@@ -108,7 +108,7 @@ You write a UI test script based on the unit test framework, adding the invoking
In this example, the UI test script is written based on the preceding unit test script. First, add the dependency package, as shown below:
```js
import {UiDriver,BY,UiComponent,MatchPattern} from '@ohos.uitest'
import {Driver,ON,Component,MatchPattern} from '@ohos.uitest'
```
Then, write specific test code. Specifically, implement the click action on the started application page and add checkpoint check cases.
......@@ -131,16 +131,16 @@ export default function abilityTest() {
expect(Ability.context.abilityInfo.name).assertEqual('MainAbility');
})
//ui test code
//init uidriver
var driver = await UiDriver.create();
//init driver
var driver = await Driver.create();
await driver.delayMs(1000);
//find button by text 'Next'
var button = await driver.findComponent(BY.text('Next'));
var button = await driver.findComponent(ON.text('Next'));
//click button
await button.click();
await driver.delayMs(1000);
//check text
await driver.assertComponentExist(BY.text('after click'));
await driver.assertComponentExist(ON.text('after click'));
await driver.pressBack();
done();
})
......
......@@ -16,16 +16,18 @@ Call **createDistributedObject()** to create a distributed data object instance.
**Table 1** API for creating a distributed data object instance
| Package| API| Description|
| Bundle Name| API| Description|
| -------- | -------- | -------- |
| ohos.data.distributedDataObject| createDistributedObject(source: object): DistributedObject | Creates a distributed data object instance for data operations.<br>- **source**: attributes of the distributed data object to set.<br>- **DistributedObject**: returns the distributed data object created. |
| ohos.data.distributedDataObject| createDistributedObject(source: object): DistributedObject | Creates a distributed data object instance for data operations.<br>- **source**: attributes of the distributed data object to create.<br>- **DistributedObject**: returns the distributed data object created.|
### Generating a Session ID
Call **genSessionId()** to generate a session ID randomly. The generated session ID can be used to set the session ID of a distributed data object.
**Table 2** API for generating a session ID randomly
| Package| API| Description|
| Bundle Name| API| Description|
| -------- | -------- | -------- |
| ohos.data.distributedDataObject| genSessionId(): string | Generates a session ID, which can be used as the session ID of a distributed data object.|
......@@ -34,9 +36,10 @@ Call **genSessionId()** to generate a session ID randomly. The generated session
Call **setSessionId()** to set a session ID for a distributed data object. The session ID is a unique identifier for one collaboration across devices. The distributed data objects to be synchronized must be associated with the same session ID.
**Table 3** API for setting a session ID
| Class| API| Description|
| -------- | -------- | -------- |
| DistributedDataObject | setSessionId(sessionId?: string): boolean | Sets a session ID for a distributed data object.<br>**sessionId**: session ID of a distributed data object in a trusted network. To remove a distributed data object from the network, set this parameter to "" or leave it empty. |
| DistributedDataObject | setSessionId(sessionId?: string): boolean | Sets a session ID for this distributed data object.<br>**sessionId**: ID of the distributed data object on a trusted network. To remove a distributed data object from the network, set this parameter to "" or leave it empty.|
### Observing Data Changes
......@@ -54,6 +57,7 @@ Call **on()** to subscribe to data changes of a distributed data object. When th
Call **on()** to subscribe to status changes of a distributed data object. The status can be online or offline. When the status changes, a callback will be invoked to return the status. You can use **off()** to unsubscribe from the status changes.
**Table 5** APIs for observing status changes of a distributed data object
| Class| API| Description|
| -------- | -------- | -------- |
| DistributedDataObject| on(type: 'status', callback: Callback<{ sessionId: string, networkId: string, status: 'online' \| 'offline' }>): void | Subscribes to the status changes of a distributed data object.|
......@@ -87,89 +91,105 @@ The following example shows how to implement distributed data object synchroniza
```js
import distributedObject from '@ohos.data.distributedDataObject';
```
2. Apply for the permission.
Add the permissions required (FA model) to the **config.json** file. The sample code is as follows:
Add the required permission (FA model) to the **config.json** file.
```json
{
```json
{
"module": {
"reqPermissions": [
{
"name": "ohos.permission.DISTRIBUTED_DATASYNC"
"name": "ohos.permission.DISTRIBUTED_DATASYNC"
}
]
}
}
```
}
```
For the apps based on the stage model, see [Declaring Permissions](../security/accesstoken-guidelines.md#stage-model).
This permission must also be granted by the user when the application is started for the first time. The sample code is as follows:
This permission must also be granted by the user when the application is started for the first time.
```js
import featureAbility from '@ohos.ability.featureAbility';
function grantPermission() {
console.info('grantPermission');
let context = featureAbility.getContext();
context.requestPermissionsFromUser(['ohos.permission.DISTRIBUTED_DATASYNC'], 666, function (result) {
console.info(`result.requestCode=${result.requestCode}`)
})
console.info('end grantPermission');
}
grantPermission();
```
```js
// FA model
import featureAbility from '@ohos.ability.featureAbility';
function grantPermission() {
console.info('grantPermission');
let context = featureAbility.getContext();
context.requestPermissionsFromUser(['ohos.permission.DISTRIBUTED_DATASYNC'], 666, function (result) {
console.info(`requestPermissionsFromUser CallBack`);
})
console.info('end grantPermission');
}
grantPermission();
```
```ts
// Stage model
import UIAbility from '@ohos.app.ability.UIAbility';
let context = null;
class EntryAbility extends UIAbility {
onWindowStageCreate(windowStage) {
context = this.context;
}
}
function grantPermission() {
let permissions = ['ohos.permission.DISTRIBUTED_DATASYNC'];
context.requestPermissionsFromUser(permissions).then((data) => {
console.info('success: ${data}');
}).catch((error) => {
console.error('failed: ${error}');
});
}
grantPermission();
```
3. Obtain a distributed data object instance.
The sample code is as follows:
```js
var local_object = distributedObject.createDistributedObject({
name: undefined,
age: undefined,
isVis: true,
parent: undefined,
list: undefined
let localObject = distributedObject.createDistributedObject({
name: undefined,
age: undefined,
isVis: true,
parent: undefined,
list: undefined
});
var sessionId = distributedObject.genSessionId();
let sessionId = distributedObject.genSessionId();
```
4. Add the distributed data object instance to a network for data synchronization. The data objects in the synchronization network include the local and remote objects.
The sample code is as follows:
```js
// Local object
var local_object = distributedObject.createDistributedObject({
name: "jack",
age: 18,
isVis: true,
parent: { mother: "jack mom", father: "jack Dad" },
list: [{ mother: "jack mom" }, { father: "jack Dad" }]
let localObject = distributedObject.createDistributedObject({
name: "jack",
age: 18,
isVis: true,
parent: { mother: "jack mom", father: "jack Dad" },
list: [{ mother: "jack mom" }, { father: "jack Dad" }]
});
local_object.setSessionId(sessionId);
localObject.setSessionId(sessionId);
// Remote object
var remote_object = distributedObject.createDistributedObject({
name: undefined,
age: undefined,
isVis: true,
parent: undefined,
list: undefined
let remoteObject = distributedObject.createDistributedObject({
name: undefined,
age: undefined,
isVis: true,
parent: undefined,
list: undefined
});
// After learning that the local device goes online, the remote object synchronizes data. That is, name changes to jack and age to 18.
remote_object.setSessionId(sessionId);
remoteObject.setSessionId(sessionId);
```
5. Observe the data changes of the distributed data object.
You can subscribe to data changes of the remote object. When the data in the remote object changes, a callback will be called to return the data changes.
The sample code is as follows:
5. Observe the data changes of the distributed data object. You can subscribe to data changes of the remote object. When the data in the remote object changes, a callback will be invoked to return the data changes.
```js
function changeCallback(sessionId, changeData) {
......@@ -177,109 +197,86 @@ The following example shows how to implement distributed data object synchroniza
if (changeData != null && changeData != undefined) {
changeData.forEach(element => {
console.info("changed !" + element + " " + local_object[element]);
});
}
console.info("changed !" + element + " " + localObject[element]);
});
}
}
// To refresh the page in changeCallback, correctly bind (this) to the changeCallback.
local_object.on("change", this.changeCallback.bind(this));
localObject.on("change", this.changeCallback.bind(this));
```
6. Modify attributes of the distributed data object.
The object attributes support basic data types (such as number, Boolean, and string) and complex data types (array and nested basic types).
The sample code is as follows:
6. Modify attributes of the distributed data object. The object attributes support basic data types (such as number, Boolean, and string) and complex data types (array and nested basic types).
```js
local_object.name = "jack";
local_object.age = 19;
local_object.isVis = false;
local_object.parent = { mother: "jack mom", father: "jack Dad" };
local_object.list = [{ mother: "jack mom" }, { father: "jack Dad" }];
localObject.name = "jack";
localObject.age = 19;
localObject.isVis = false;
localObject.parent = { mother: "jack mom", father: "jack Dad" };
localObject.list = [{ mother: "jack mom" }, { father: "jack Dad" }];
```
> **NOTE**<br>
> For the distributed data object of the complex type, only the root attribute can be modified. The subordinate attributes cannot be modified. Example:
> For the distributed data object of the complex type, only the root attribute can be modified. The subordinate attributes cannot be modified.
```js
// Supported modification.
local_object.parent = { mother: "mom", father: "dad" };
localObject.parent = { mother: "mom", father: "dad" };
// Modification not supported.
local_object.parent.mother = "mom";
localObject.parent.mother = "mom";
```
7. Access the distributed data object.
Obtain the distributed data object attributes, which are the latest data on the network.
The sample code is as follows:
7. Access the distributed data object.<br>Obtain the distributed data object attributes, which are the latest data on the network.
```js
console.info("name " + local_object["name"]);
console.info("name " + localObject["name"]);
```
8. Unsubscribe from data changes.
You can specify the callback to unregister. If you do not specify the callback, all data change callbacks of the distributed data object will be unregistered.
The sample code is as follows:
8. Unsubscribe from data changes. You can specify the callback to unregister. If you do not specify the callback, all data change callbacks of the distributed data object will be unregistered.
```js
// Unregister the specified data change callback.
local_object.off("change", changeCallback);
localObject.off("change", changeCallback);
// Unregister all data change callbacks.
local_object.off("change");
localObject.off("change");
```
9. Subscribe to the status (online/offline) changes of the distributed data object. A callback will be invoked to report the status change when the target distributed data object goes online or offline.
The sample code is as follows:
9. Subscribe to status changes of this distributed data object. A callback will be invoked to report the status change when the target distributed data object goes online or offline.
```js
function statusCallback(sessionId, networkId, status) {
this.response += "status changed " + sessionId + " " + status + " " + networkId;
}
function statusCallback(sessionId, networkId, status) {
this.response += "status changed " + sessionId + " " + status + " " + networkId;
}
local_object.on("status", this.statusCallback);
localObject.on("status", this.statusCallback);
```
10. Save a distributed data object and delete it.
```js
// Save a distributed data object.
g_object.save("local").then((result) => {
console.info("save sessionId " + result.sessionId);
console.info("save version " + result.version);
console.info("save deviceId " + result.deviceId);
localObject.save("local").then((result) => {
console.info("save sessionId " + result.sessionId);
console.info("save version " + result.version);
console.info("save deviceId " + result.deviceId);
}, (result) => {
console.info("save local failed.");
console.info("save local failed.");
});
// Delete a distributed data object..
g_object.revokeSave().then((result) => {
console.info("revokeSave success.");
// Revoke the data saving operation.
localObject.revokeSave().then((result) => {
console.info("revokeSave success.");
}, (result) => {
console.info("revokeSave failed.");
console.info("revokeSave failed.");
});
```
11. Unsubscribe from the status changes of the distributed data object.
You can specify the callback to unregister. If you do not specify the callback, all status change callbacks of this distributed data object will be unregistered.
The sample code is as follows:
11. Unsubscribe from the status changes of this distributed data object. You can specify the callback to unregister. If you do not specify the callback, this API unregisters all status change callbacks of this distributed data object.
```js
// Unregister the specified status change callback.
local_object.off("status", this.statusCallback);
localObject.off("status", this.statusCallback);
// Unregister all status change callbacks.
local_object.off("status");
localObject.off("status");
```
12. Remove a distributed data object from the synchronization network. Data changes on the local object will not be synchronized to the removed distributed data object.
The sample code is as follows:
12. Remove the distributed data object from the synchronization network. The data changes on the local object will not be synchronized to the removed distributed data object.
```js
local_object.setSessionId("");
localObject.setSessionId("");
```
......@@ -24,7 +24,7 @@ Obtain a **Preferences** instance for data operations. A **Preferences** instanc
**Table 1** API for obtaining a **Preferences** instance
| Package | API | Description |
| Bundle Name | API | Description |
| --------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| ohos.data.preferences | getPreferences(context: Context, name: string): Promise\<Preferences> | Obtains a **Preferences** instance.|
......@@ -75,7 +75,7 @@ You can use the following APIs to delete a **Preferences** instance or data file
**Table 6** APIs for deleting **Preferences**
| Package | API | Description |
| Bundle Name | API | Description |
| --------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| ohos.data.preferences | deletePreferences(context: Context, name: string): Promise\<void> | Deletes a **Preferences** instance from the memory and its files from the device.|
| ohos.data.preferences | removePreferencesFromCache(context: Context, name: string): Promise\<void> | Removes a **Preferences** instance from the memory to release memory. |
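A minimal sketch of deleting a **Preferences** instance and its file is shown below. It assumes the **data_preferences** module import, a valid application context, and the 'mystore' instance name used elsewhere in this guide; adapt them to your project.

```ts
// Hedged sketch: delete the 'mystore' Preferences instance from memory and its file from the device.
// 'context' is assumed to be a valid application context (for example, this.context in a UIAbility).
data_preferences.deletePreferences(context, 'mystore').then(() => {
    console.info("Deleted the preferences successfully.");
}).catch((err) => {
    console.info("Failed to delete the preferences. Cause: " + err);
})
```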
......@@ -113,22 +113,20 @@ You can use the following APIs to delete a **Preferences** instance or data file
```ts
// Obtain the context.
import UIAbility from '@ohos.app.ability.UIAbility'
let context = null;
import UIAbility from '@ohos.app.ability.UIAbility';
let preferences = null;
export default class EntryAbility extends UIAbility {
onWindowStageCreate(windowStage){
context = this.context;
onWindowStageCreate(windowStage) {
let promise = data_preferences.getPreferences(this.context, 'mystore');
promise.then((pref) => {
preferences = pref;
}).catch((err) => {
console.info("Failed to get the preferences.");
})
}
}
let promise = data_preferences.getPreferences(context, 'mystore');
promise.then((pref) => {
preferences = pref;
}).catch((err) => {
console.info("Failed to get the preferences.");
})
```
3. Write data.
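A minimal sketch of this step, assuming the **preferences** instance obtained above, might look as follows; the key **'startup'** and value **'auto'** are illustrative only.

```ts
// Hedged sketch: write a key-value pair and persist it to the preferences file.
preferences.put('startup', 'auto').then(() => {
    console.info("Put the value of 'startup' successfully.");
    // flush() writes the cached data from memory to the preferences file.
    return preferences.flush();
}).catch((err) => {
    console.info("Failed to put the value of 'startup'. Cause: " + err);
})
```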
......@@ -200,6 +198,6 @@ You can use the following APIs to delete a **Preferences** instance or data file
proDelete.then(() => {
console.info("Deleted data successfully.");
}).catch((err) => {
console.info("Failed to delete data. Cause: " + err);
console.info("Failed to delete data. Cause: " + err);
})
```
......@@ -36,5 +36,5 @@ Currently you can have access to statistics on the application usage, and the no
Deregister the callback for application group changes.
## Required Permissions
- Before calling the following system APIs, you need to apply for the **ohos.permission.BUNDLE_ACTIVE_INFO** permission: **queryBundleActiveStates**, **queryBundleStateInfos**, **queryBundleStateInfoByInterval**, **queryBundleActiveEventStates**, **queryAppNotificationNumber**, **queryAppUsagePriorityGroup(bundleName?)**, **setBundleGroup**, **registerGroupCallBack**, and **unRegisterGroupCallBack**.
- This permission is not required for calling third-party APIs: **queryCurrentBundleActiveStates**, **queryAppUsagePriorityGroup()**, and **isIdleState**.
- Before calling the following system APIs, you must request the **ohos.permission.BUNDLE_ACTIVE_INFO** permission: **isIdleState**, **queryBundleEvents**, **queryBundleStatsInfos**, **queryBundleStatsInfoByInterval**, **queryDeviceEventStats**, **queryNotificationEventStats**, **queryAppGroup(bundleName)**, **setAppGroup**, **registerAppGroupCallBack**, **unregisterAppGroupCallBack**, **queryModuleUsageRecords**, and **queryModuleUsageRecords(maxnum)**.
- You do not need to request this permission before calling **queryCurrentBundleEvents** and **queryAppGroup()**, which are third-party APIs.
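A minimal **module.json5** sketch of requesting this permission is shown below, assuming your application is allowed to use it; the ability name and reason resource are placeholders.

```json
{
  "module": {
    "requestPermissions": [
      {
        "name": "ohos.permission.BUNDLE_ACTIVE_INFO",
        "reason": "$string:reason",
        "usedScene": {
          "abilities": [
            "EntryAbility"
          ],
          "when": "always"
        }
      }
    ]
  }
}
```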
......@@ -225,7 +225,7 @@ import usageStatistics from '@ohos.resourceschedule.usageStatistics';
}
```
7. Check whether the application specified by **bundleName** is in the idle state. This requires no permission to be configured. A third-party application can only check the idle status of itself.
7. Check whether the application specified by **bundleName** is in the idle state. This requires the **ohos.permission.BUNDLE_ACTIVE_INFO** permission to be configured.
```js
import usageStatistics from '@ohos.resourceschedule.usageStatistics'
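// A hedged sketch of calling isIdleState(); the bundle name "com.example.demo" is a placeholder.
try {
    usageStatistics.isIdleState("com.example.demo").then((res) => {
        console.log('BUNDLE_ACTIVE isIdleState promise success, result: ' + JSON.stringify(res));
    }).catch((error) => {
        console.log('BUNDLE_ACTIVE isIdleState promise failed, code is: ' + error.code + ',message is: ' + error.message);
    });
} catch (error) {
    console.log('BUNDLE_ACTIVE isIdleState throw error, code is: ' + error.code + ',message is: ' + error.message);
}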
......@@ -531,4 +531,4 @@ import usageStatistics from '@ohos.resourceschedule.usageStatistics';
} catch (error) {
console.log('BUNDLE_ACTIVE unregisterAppGroupCallBack throw error, code is: ' + error.code + ',message is: ' + error.message);
}
```
\ No newline at end of file
```
# Device
# Device Management
- USB Service
- [USB Service Overview](usb-overview.md)
......@@ -17,3 +17,5 @@
- Update Service
- [Sample Server Overview](sample-server-overview.md)
- [Sample Server Development](sample-server-guidelines.md)
- Stationary
- [Stationary Development](stationary-guidelines.md)
# Stationary Development
## When to Use
An application can call the **Stationary** module to obtain the device status, for example, whether the device is absolutely or relatively still.
For details about the APIs, see [Stationary](../reference/apis/js-apis-stationary.md).
## Device Status Type Parameters
| Name| Description|
| -------- | -------- |
| still | Absolutely still.|
| relativeStill | Relatively still.|
## Parameters for Subscribing to Device Status Events
| Name | Value | Description |
| ------------------------------ | ---- | ---------------------------------------- |
| ENTER | 1 | Event indicating entering device status. |
| EXIT | 2 | Event indicating exiting device status.|
| ENTER_EXIT | 3 | Event indicating entering and exiting device status.|
## Returned Device Status Parameters
| Name | Value | Description |
| ------------------------------ | ---- | ---------------------------------------- |
| ENTER | 1 | Entering device status. |
| EXIT | 2 | Exiting device status.|
## Available APIs
| Module | Name | Description |
| ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| ohos.stationary | on(activity: ActivityType, event: ActivityEvent, reportLatencyNs: number, callback: Callback&lt;ActivityResponse&gt;): void | Subscribes to the device status. This API uses an asynchronous callback to return the result.|
| ohos.stationary | once(activity: ActivityType, callback: Callback&lt;ActivityResponse&gt;): void | Obtains the device status. This API uses an asynchronous callback to return the result.|
| ohos.stationary | off(activity: ActivityType, event: ActivityEvent, callback?: Callback&lt;ActivityResponse&gt;): void | Unsubscribes from the device status. |
## Constraints
The device must support the acceleration sensor.
## How to Develop
1. Subscribe to the event indicating entering the absolute still state. The event is reported with a latency of 1 second.
```js
import stationary from '@ohos.stationary';
var reportLatencyNs = 1000000000;
try {
stationary.on('still', stationary.ActivityEvent.ENTER, reportLatencyNs, (data) => {
console.log('data='+ JSON.stringify(data));
})
} catch (err) {
console.error('errCode: ' + err.code + ' ,msg: ' + err.message);
}
```
2. Obtain the event indicating entering the absolute still state.
```js
import stationary from '@ohos.stationary';
try {
stationary.once('still', (data) => {
console.log('data='+ JSON.stringify(data));
})
} catch (err) {
console.error('errCode: ' + err.code + ' ,msg: ' + err.message);
}
```
3. Unsubscribe from the event indicating entering the absolute still state.
```js
import stationary from '@ohos.stationary';
try {
stationary.off('still', stationary.ActivityEvent.ENTER, (data) => {
console.log('data='+ JSON.stringify(data));
})
} catch (err) {
console.error('errCode: ' + err.code + ' ,msg: ' + err.message);
}
```
# DFX
- Application Event Logging
- [Development of Application Event Logging](hiappevent-guidelines.md)
- Performance Tracing
- [Development of Performance Tracing](hitracemeter-guidelines.md)
- Distributed Call Chain Tracing
- [Development of Distributed Call Chain Tracing](hitracechain-guidelines.md)
- [Development of Application Event Logging](hiappevent-guidelines.md)
- [Development of Performance Tracing](hitracemeter-guidelines.md)
- [Development of Distributed Call Chain Tracing](hitracechain-guidelines.md)
- Error Management
- [Development of Error Manager](errormanager-guidelines.md)
- [Development of Application Recovery](apprecovery-guidelines.md)
\ No newline at end of file
- [Development of Application Recovery](apprecovery-guidelines.md)
......@@ -134,7 +134,7 @@ After the callback triggers **appRecovery.saveAppState()**, **onSaveState(state,
- Restore data.
After the callback triggers **appRecovery.restartApp()**, the application is restarted. After the restart, **onSaveState(state, wantParams)** of **MainAbility** is called, and the saved data is in **parameters** of **want**.
After the callback triggers **appRecovery.restartApp()**, the application is restarted. After the restart, **onCreate(want, launchParam)** of **MainAbility** is called, and the saved data is in **parameters** of **want**.
```ts
storage: LocalStorage
......
......@@ -37,10 +37,10 @@ When an asynchronous callback is used, the return value can be processed directl
## Development Example
```ts
import Ability from '@ohos.application.Ability'
import errorManager from '@ohos.application.errorManager'
import errorManager from '@ohos.app.ability.errorManager';
var registerId = -1;
var callback = {
let registerId = -1;
let callback = {
onUnhandledException: function (errMsg) {
console.log(errMsg);
}
......@@ -48,13 +48,13 @@ var callback = {
export default class MainAbility extends Ability {
onCreate(want, launchParam) {
console.log("[Demo] MainAbility onCreate")
registerId = errorManager.registerErrorObserver(callback);
registerId = errorManager.on("error", callback);
globalThis.abilityWant = want;
}
onDestroy() {
console.log("[Demo] MainAbility onDestroy")
errorManager.unregisterErrorObserver(registerId, (result) => {
errorManager.off("error", registerId, (result) => {
console.log("[Demo] result " + result.code + ";" + result.message)
});
}
......
......@@ -9,11 +9,10 @@ FilePicker provides the following modes:
## Development Guidelines
> **NOTE**
>
> FilePicker supports only the applications developed based on the stage model.
> For details about the stage model, see [Interpretation of the Application Model](../application-models/application-model-description.md).
You can use [AbilityContext.startAbilityForResult(want, options)](../reference/apis/js-apis-ability-context.md##abilitycontextstartabilityforresult-1) with different parameters to start different FilePicker modes.
You can use [AbilityContext.startAbilityForResult(want, options)](../reference/apis/js-apis-inner-application-uiAbilityContext.md#uiabilitycontextstartabilityforresult-1) with different parameters to start FilePicker in different modes.
You need to use [Want](../reference/apis/js-apis-application-want.md) to specify the startup parameters, such as **action**, to start FilePicker. For details, see the following sample code.
......@@ -32,8 +31,7 @@ ArkTS sample code:
// Start FilePicker to select a file.
globalThis.context.startAbilityForResult(
{
bundleName: "com.ohos.filepicker",
abilityName: "EntryAbility",
action: "ohos.want.action.OPEN_FILE",
parameters: {
'startMode': 'choose', //choose or save
}
......@@ -44,8 +42,7 @@ globalThis.context.startAbilityForResult(
// Start FilePicker to save a file.
globalThis.context.startAbilityForResult(
{
bundleName: "com.ohos.filepicker",
abilityName: "EntryAbility",
action: "ohos.want.action.CREATE_FILE",
parameters: {
'startMode': 'save', //choose or save
'saveFile': 'test.jpg',
......
......@@ -39,19 +39,19 @@ The following describes how to create an album named **myAlbum**.
```ts
async function example() {
let mediaType = mediaLibrary.MediaType.IMAGE;
let DIR_IMAGE = mediaLibrary.DirectoryType.DIR_IMAGE;
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const path = await media.getPublicDirectory(DIR_IMAGE);
// myAlbum is the path for storing the new file and the name of the new album.
media.createAsset(mediaType, 'test.jpg', path + 'myAlbum/', (err, fileAsset) => {
if (fileAsset != undefined) {
console.info('createAlbum successfully, message = ' + fileAsset);
} else {
console.info('createAlbum failed, message = ' + err);
}
});
let mediaType = mediaLibrary.MediaType.IMAGE;
let DIR_IMAGE = mediaLibrary.DirectoryType.DIR_IMAGE;
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const path = await media.getPublicDirectory(DIR_IMAGE);
// myAlbum is the path for storing the new file and the name of the new album.
media.createAsset(mediaType, 'test.jpg', path + 'myAlbum/', (err, fileAsset) => {
if (fileAsset === undefined) {
console.error('createAlbum failed, message = ' + err);
} else {
console.info('createAlbum successfully, message = ' + JSON.stringify(fileAsset));
}
});
}
```
......@@ -75,20 +75,20 @@ The following describes how to rename the album **newAlbum**.
```ts
async function example() {
let AlbumNoArgsfetchOp = {
selections: '',
selectionArgs: [],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let albumList = await media.getAlbums(AlbumNoArgsfetchOp);
let album = albumList[0];
album.albumName = 'newAlbum';
// Void callback.
album.commitModify().then(function() {
console.info("albumRename successfully");
}).catch(function(err){
console.info("albumRename failed with error: " + err);
});
let AlbumNoArgsfetchOp = {
selections: '',
selectionArgs: [],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let albumList = await media.getAlbums(AlbumNoArgsfetchOp);
let album = albumList[0];
album.albumName = 'newAlbum';
// Void callback.
album.commitModify().then(() => {
console.info("albumRename successfully");
}).catch((err) => {
console.error("albumRename failed with error: " + err);
});
}
```
......@@ -37,15 +37,15 @@ The following describes how to obtain the public directory that stores camera fi
```ts
async function example(){
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let DIR_CAMERA = mediaLibrary.DirectoryType.DIR_CAMERA;
const dicResult = await media.getPublicDirectory(DIR_CAMERA);
if (dicResult == 'Camera/') {
console.info('mediaLibraryTest : getPublicDirectory passed');
} else {
console.info('mediaLibraryTest : getPublicDirectory failed');
}
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let DIR_CAMERA = mediaLibrary.DirectoryType.DIR_CAMERA;
const dicResult = await media.getPublicDirectory(DIR_CAMERA);
if (dicResult == 'Camera/') {
console.info('mediaLibraryTest : getPublicDirectory passed');
} else {
console.error('mediaLibraryTest : getPublicDirectory failed');
}
}
```
......@@ -59,47 +59,52 @@ Users can access files stored in the public directories through the system appli
You can call [mediaLibrary.FileAsset.open](../reference/apis/js-apis-medialibrary.md#open8-1) to open a file in a public directory.
You can call [fileio.open](../reference/apis/js-apis-fileio.md#fileioopen7) to open a file in the application sandbox. The sandbox directory can be accessed only through the application context.
You can call [fs.open](../reference/apis/js-apis-file-fs.md#fsopen) to open a file in the application sandbox. The sandbox directory can be accessed only through the application context.
**Prerequisites**
- You have obtained a **MediaLibrary** instance.
- You have granted the permission **ohos.permission.WRITE_MEDIA**.
- You have imported the module [@ohos.fileio](../reference/apis/js-apis-fileio.md) in addition to @ohos.multimedia.mediaLibrary.
- You have granted the permissions **ohos.permission.READ_MEDIA** and **ohos.permission.WRITE_MEDIA**.
- You have imported the module [@ohos.file.fs](../reference/apis/js-apis-file-fs.md) in addition to @ohos.multimedia.mediaLibrary.
- The **testFile.txt** file has been created and contains content.
**How to Develop**
1. Call [context.filesDir](../reference/apis/js-apis-inner-app-context.md#contextgetfilesdir) to obtain the directory of the application sandbox.
1. Call [context.filesDir](../reference/apis/js-apis-file-fs.md) to obtain the directory of the application sandbox.
2. Call **MediaLibrary.getFileAssets** and **FetchFileResult.getFirstObject** to obtain the first file in the result set of the public directory.
3. Call **fileio.open** to open the file in the sandbox.
3. Call **fs.open** to open the file in the sandbox.
4. Call **fileAsset.open** to open the file in the public directory.
5. Call **fileio.copyfile** to copy the file.
6. Call **fileAsset.close** and **fileio.close** to close the file.
5. Call [fs.copyfile](../reference/apis/js-apis-file-fs.md#fscopyfile) to copy the file.
6. Call **fileAsset.close** and [fs.close](../reference/apis/js-apis-file-fs.md#fsclose) to close the file.
**Example 1: Copying Files from the Public Directory to the Sandbox**
```ts
async function copyPublic2Sandbox() {
try {
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let sandboxDirPath = globalThis.context.filesDir;
let sandboxDirPath = context.filesDir;
let fileKeyObj = mediaLibrary.FileKey;
let fileAssetFetchOp = {
selections: fileKeyObj.DISPLAY_NAME + '= ?',
selectionArgs: ['testFile.txt'],
selections: fileKeyObj.DISPLAY_NAME + '= ?',
selectionArgs: ['testFile.txt'],
};
let fetchResult = await media.getFileAssets(fileAssetFetchOp);
let fileAsset = await fetchResult.getFirstObject();
let fdPub = await fileAsset.open('rw');
let fdSand = await fileio.open(sandboxDirPath + '/testFile.txt', 0o2 | 0o100, 0o666);
await fileio.copyFile(fdPub, fdSand);
let fdSand = await fs.open(sandboxDirPath + '/testFile.txt', fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE);
await fs.copyFile(fdPub, fdSand.fd);
await fileAsset.close(fdPub);
await fileio.close(fdSand);
await fs.close(fdSand.fd);
let content_sand = await fileio.readText(sandboxDirPath + '/testFile.txt');
console.log('content read from sandbox file: ', content_sand)
let content_sand = await fs.readText(sandboxDirPath + '/testFile.txt');
console.info('content read from sandbox file: ', content_sand)
} catch (err) {
console.info('[demo] copyPublic2Sandbox fail, err: ', err);
}
}
```
......@@ -107,81 +112,81 @@ async function copyPublic2Sandbox() {
```ts
async function copySandbox2Public() {
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let sandboxDirPath = globalThis.context.filesDir;
let DIR_DOCUMENTS = mediaLibrary.DirectoryType.DIR_DOCUMENTS;
const publicDirPath = await media.getPublicDirectory(DIR_DOCUMENTS);
try {
let fileAsset = await media.createAsset(mediaLibrary.MediaType.FILE, 'testFile02.txt', publicDirPath);
console.info('createFile successfully, message = ' + fileAsset);
} catch (err) {
console.info('createFile failed, message = ' + err);
}
try {
let fileKeyObj = mediaLibrary.FileKey;
let fileAssetFetchOp = {
selections: fileKeyObj.DISPLAY_NAME + '= ?',
selectionArgs: ['testFile02.txt'],
};
let fetchResult = await media.getFileAssets(fileAssetFetchOp);
var fileAsset = await fetchResult.getFirstObject();
} catch (err) {
console.info('file asset get failed, message = ' + err);
}
let fdPub = await fileAsset.open('rw');
let fdSand = await fileio.open(sandboxDirPath + 'testFile.txt', 0o2);
await fileio.copyFile(fdSand, fdPub);
await fileio.close(fdPub);
await fileio.close(fdSand);
let fdPubRead = await fileAsset.open('rw');
try {
let arrayBuffer = new ArrayBuffer(4096);
await fileio.read(fdPubRead, arrayBuffer);
var content_pub = String.fromCharCode(...new Uint8Array(arrayBuffer));
fileAsset.close(fdPubRead);
} catch (err) {
console.log('read text failed, message = ', err);
}
console.log('content read from public file: ', content_pub);
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let sandboxDirPath = context.filesDir;
let DIR_DOCUMENTS = mediaLibrary.DirectoryType.DIR_DOCUMENTS;
const publicDirPath = await media.getPublicDirectory(DIR_DOCUMENTS);
try {
let fileAsset = await media.createAsset(mediaLibrary.MediaType.FILE, 'testFile02.txt', publicDirPath);
console.info('createFile successfully, message = ' + fileAsset);
} catch (err) {
console.error('createFile failed, message = ' + err);
}
try {
let fileKeyObj = mediaLibrary.FileKey;
let fileAssetFetchOp = {
selections: fileKeyObj.DISPLAY_NAME + '= ?',
selectionArgs: ['testFile02.txt'],
};
let fetchResult = await media.getFileAssets(fileAssetFetchOp);
var fileAsset = await fetchResult.getFirstObject();
} catch (err) {
console.error('file asset get failed, message = ' + err);
}
let fdPub = await fileAsset.open('rw');
let fdSand = await fs.open(sandboxDirPath + '/testFile.txt', fs.OpenMode.READ_WRITE);
await fs.copyFile(fdSand.fd, fdPub);
await fileAsset.close(fdPub);
await fs.close(fdSand.fd);
let fdPubRead = await fileAsset.open('rw');
try {
let arrayBuffer = new ArrayBuffer(4096);
await fs.read(fdPubRead, arrayBuffer);
var content_pub = String.fromCharCode(...new Uint8Array(arrayBuffer));
fileAsset.close(fdPubRead);
} catch (err) {
console.error('read text failed, message = ', err);
}
console.info('content read from public file: ', content_pub);
}
```
### Reading and Writing a File
You can use **FileAsset.open** and **FileAsset.close** of [mediaLibrary](../reference/apis/js-apis-medialibrary.md) to open and close a file, and use **fileio.read** and **fileio.write** of [fileio](../reference/apis/js-apis-fileio.md) to read and write a file.
You can use **FileAsset.open** and **FileAsset.close** of [mediaLibrary](../reference/apis/js-apis-medialibrary.md) to open and close a file, and use **fs.read** and **fs.write** in [file.fs](../reference/apis/js-apis-file-fs.md) to read and write the file.
**Prerequisites**
- You have obtained a **MediaLibrary** instance.
- You have granted the permission **ohos.permission.WRITE_MEDIA**.
- You have imported the module [@ohos.fileio](../reference/apis/js-apis-fileio.md) in addition to @ohos.multimedia.mediaLibrary.
- You have granted the permissions **ohos.permission.READ_MEDIA** and **ohos.permission.WRITE_MEDIA**.
- You have imported the module [@ohos.file.fs](../reference/apis/js-apis-file-fs.md) in addition to @ohos.multimedia.mediaLibrary.
**How to Develop**
1. Create a file.
```ts
async function example() {
let mediaType = mediaLibrary.MediaType.FILE;
let DIR_DOCUMENTS = mediaLibrary.DirectoryType.DIR_DOCUMENTS;
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const path = await media.getPublicDirectory(DIR_DOCUMENTS);
media.createAsset(mediaType, "testFile.text", path).then (function (asset) {
console.info("createAsset successfully:" + JSON.stringify(asset));
}).catch(function(err){
console.info("createAsset failed with error: " + err);
});
}
```
```ts
async function example() {
let mediaType = mediaLibrary.MediaType.FILE;
let DIR_DOCUMENTS = mediaLibrary.DirectoryType.DIR_DOCUMENTS;
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const path = await media.getPublicDirectory(DIR_DOCUMENTS);
media.createAsset(mediaType, "testFile.text", path).then((asset) => {
console.info("createAsset successfully:" + JSON.stringify(asset));
}).catch((err) => {
console.error("createAsset failed with error: " + err);
});
}
```
2. Call **FileAsset.open** to open the file.
3. Call **fileio.write** to write a string to the file.
3. Call [fs.write](../reference/apis/js-apis-file-fs.md#fswrite) to write a string to the file.
4. Call **fileio.read** to read the file and save the data read in an array buffer.
4. Call [fs.read](../reference/apis/js-apis-file-fs.md#fsread) to read the file and save the data read in an array buffer.
5. Convert the array buffer to a string.
......@@ -191,25 +196,25 @@ You can use **FileAsset.open** and **FileAsset.close** of [mediaLibrary](../refe
```ts
async function writeOnlyPromise() {
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let fileKeyObj = mediaLibrary.FileKey;
let fileAssetFetchOp = {
selections: fileKeyObj.DISPLAY_NAME + '= ?',
selectionArgs: ['testFile.txt'],
};
let fetchResult = await media.getFileAssets(fileAssetFetchOp);
let fileAsset = await fetchResult.getFirstObject();
console.info('fileAssetName: ', fileAsset.displayName);
try {
let fd = await fileAsset.open('w');
console.info('file descriptor: ', fd);
await fileio.write(fd, "Write file test content.");
await fileAsset.close(fd);
} catch (err) {
console.info('write file failed, message = ', err);
}
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let fileKeyObj = mediaLibrary.FileKey;
let fileAssetFetchOp = {
selections: fileKeyObj.DISPLAY_NAME + '= ?',
selectionArgs: ['testFile.txt'],
};
let fetchResult = await media.getFileAssets(fileAssetFetchOp);
let fileAsset = await fetchResult.getFirstObject();
console.info('fileAssetName: ', fileAsset.displayName);
try {
let fd = await fileAsset.open('w');
console.info('file descriptor: ', fd);
await fs.write(fd, "Write file test content.");
await fileAsset.close(fd);
} catch (err) {
console.error('write file failed, message = ', err);
}
}
```
......@@ -217,28 +222,28 @@ async function writeOnlyPromise() {
```ts
async function readOnlyPromise() {
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let fileKeyObj = mediaLibrary.FileKey;
let fileAssetFetchOp = {
selections: fileKeyObj.DISPLAY_NAME + '= ?' ,
selectionArgs: ['testFile.txt'],
};
let fetchResult = await media.getFileAssets(fileAssetFetchOp);
let fileAsset = await fetchResult.getFirstObject();
console.info('fileAssetName: ', fileAsset.displayName);
try {
let fd = await fileAsset.open('r');
let arrayBuffer = new ArrayBuffer(4096);
await fileio.read(fd, arrayBuffer);
let fileContent = String.fromCharCode(...new Uint8Array(arrayBuffer));
globalThis.fileContent = fileContent;
globalThis.fileName = fileAsset.displayName;
console.info('file content: ', fileContent);
await fileAsset.close(fd);
} catch (err) {
console.info('read file failed, message = ', err);
}
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let fileKeyObj = mediaLibrary.FileKey;
let fileAssetFetchOp = {
selections: fileKeyObj.DISPLAY_NAME + '= ?' ,
selectionArgs: ['testFile.txt'],
};
let fetchResult = await media.getFileAssets(fileAssetFetchOp);
let fileAsset = await fetchResult.getFirstObject();
console.info('fileAssetName: ', fileAsset.displayName);
try {
let fd = await fileAsset.open('r');
let arrayBuffer = new ArrayBuffer(4096);
await fs.read(fd, arrayBuffer);
let fileContent = String.fromCharCode(...new Uint8Array(arrayBuffer));
globalThis.fileContent = fileContent;
globalThis.fileName = fileAsset.displayName;
console.info('file content: ', fileContent);
await fileAsset.close(fd);
} catch (err) {
console.error('read file failed, message = ', err);
}
}
```
......@@ -64,64 +64,64 @@ After configuring the permissions in the **module.json5** file, the application
1. Declare the permissions in the **module.json5** file. Add the **requestPermissions** tag under **module** in the file, and set the tag based on the project requirements. For details about the tag, see [Guide for Requesting Permissions from User](../security/accesstoken-guidelines.md).
```json
{
"module": {
"requestPermissions": [
{
"name": "ohos.permission.MEDIA_LOCATION",
"reason": "$string:reason",
"usedScene": {
"abilities": [
"MainAbility"
],
"when": "always"
}
},
{
"name": "ohos.permission.READ_MEDIA",
"reason": "$string:reason",
"usedScene": {
"abilities": [
"MainAbility"
],
"when": "always"
}
},
{
"name": "ohos.permission.WRITE_MEDIA",
"reason": "$string:reason",
"usedScene": {
"abilities": [
"MainAbility"
],
"when": "always"
}
}
]
}
}
```
```json
{
"module": {
"requestPermissions": [
{
"name": "ohos.permission.MEDIA_LOCATION",
"reason": "$string:reason",
"usedScene": {
"abilities": [
"EntryAbility"
],
"when": "always"
}
},
{
"name": "ohos.permission.READ_MEDIA",
"reason": "$string:reason",
"usedScene": {
"abilities": [
"EntryAbility"
],
"when": "always"
}
},
{
"name": "ohos.permission.WRITE_MEDIA",
"reason": "$string:reason",
"usedScene": {
"abilities": [
"EntryAbility"
],
"when": "always"
}
}
]
}
}
```
2. In the **Ability.ts** file, call **requestPermissionsFromUser** in the **onWindowStageCreate** callback to check whether the required permissions have been granted and, if not, request them from the user by displaying a dialog box.
```ts
import UIAbility from '@ohos.app.ability.UIAbility';
import abilityAccessCtrl, {Permissions} from '@ohos.abilityAccessCtrl';
export default class MainAbility extends Ability {
onWindowStageCreate(windowStage) {
let list : Array<Permissions> = ['ohos.permission.READ_MEDIA', 'ohos.permission.WRITE_MEDIA'];
let permissionRequestResult;
let atManager = abilityAccessCtrl.createAtManager();
atManager.requestPermissionsFromUser(this.context, list, (err, result) => {
if (err) {
console.log('requestPermissionsFromUserError: ' + JSON.stringify(err));
} else {
permissionRequestResult=result;
console.log('permissionRequestResult: ' + JSON.stringify(permissionRequestResult));
}
});
}
}
```
```ts
import UIAbility from '@ohos.app.ability.UIAbility';
import abilityAccessCtrl, {Permissions} from '@ohos.abilityAccessCtrl';
export default class EntryAbility extends UIAbility {
onWindowStageCreate(windowStage) {
let list : Array<Permissions> = ['ohos.permission.READ_MEDIA', 'ohos.permission.WRITE_MEDIA'];
let permissionRequestResult;
let atManager = abilityAccessCtrl.createAtManager();
atManager.requestPermissionsFromUser(this.context, list, (err, result) => {
if (err) {
console.error('requestPermissionsFromUserError: ' + JSON.stringify(err));
} else {
permissionRequestResult = result;
console.info('permissionRequestResult: ' + JSON.stringify(permissionRequestResult));
}
});
}
}
```
......@@ -33,30 +33,33 @@ To specify the image as the media type, set **selectionArgs** to **MediaType.IMA
```ts
async function example() {
let fileKeyObj = mediaLibrary.FileKey;
let fileType = mediaLibrary.MediaType.IMAGE;
let option = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [fileType.toString()],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
for (let i = 0; i < fetchFileResult.getCount(); i++) {
fetchFileResult.getNextObject((err, fileAsset) => {
if (err) {
console.error('Failed ');
return;
}
console.log('fileAsset.displayName ' + i + ': ' + fileAsset.displayName);
})
}
let fileKeyObj = mediaLibrary.FileKey;
let fileType = mediaLibrary.MediaType.IMAGE;
let option = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [fileType.toString()],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
fetchFileResult.getFirstObject().then((fileAsset) => {
console.log('getFirstObject.displayName : ' + fileAsset.displayName);
for (let i = 1; i < fetchFileResult.getCount(); i++) {
fetchFileResult.getNextObject().then((fileAsset) => {
console.info('fileAsset.displayName ' + i + ': ' + fileAsset.displayName);
}).catch((err) => {
console.error('Failed to get next object: ' + err);
});
}
}).catch((err) => {
console.error('Failed to get first object: ' + err);
});
}
```
### Querying Media Assets with the Specified Date
The following describes how to obtain media assets that are added on the specified date. You can also use the modification date and shooting date as the retrieval conditions.
The following describes how to obtain all the media assets that are added from the specified date. You can also use the modification date and shooting date as the retrieval conditions.
To specify the date when the files are added as the retrieval condition, set **selections** to **FileKey.DATE_ADDED**.
......@@ -64,23 +67,26 @@ To specify the date 2022-8-5, set **selectionArgs** to **2022-8-5**.
```ts
async function example() {
let fileKeyObj = mediaLibrary.FileKey;
let option = {
selections: fileKeyObj.DATE_ADDED + '= ?',
selectionArgs: ['2022-8-5'],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
for (let i = 0; i < fetchFileResult.getCount(); i++) {
fetchFileResult.getNextObject((err, fileAsset) => {
if (err) {
console.error('Failed ');
return;
}
console.log('fileAsset.displayName ' + i + ': ' + fileAsset.displayName);
})
}
let fileKeyObj = mediaLibrary.FileKey;
let option = {
selections: fileKeyObj.DATE_ADDED + '> ?',
selectionArgs: ['2022-8-5'],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
fetchFileResult.getFirstObject().then((fileAsset) => {
console.info('getFirstObject.displayName : ' + fileAsset.displayName);
for (let i = 1; i < fetchFileResult.getCount(); i++) {
fetchFileResult.getNextObject().then((fileAsset) => {
console.info('fileAsset.displayName ' + i + ': ' + fileAsset.displayName);
}).catch((err) => {
console.error('Failed to get next object: ' + err);
});
}
}).catch((err) => {
console.error('Failed to get first object: ' + err);
});
}
```
......@@ -92,25 +98,28 @@ To sort files in descending order by the date when they are added, set **order**
```ts
async function example() {
let fileKeyObj = mediaLibrary.FileKey;
let fileType = mediaLibrary.MediaType.IMAGE;
let option = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [fileType.toString()],
order: fileKeyObj.DATE_ADDED + " DESC",
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
for (let i = 0; i < fetchFileResult.getCount(); i++) {
fetchFileResult.getNextObject((err, fileAsset) => {
if (err) {
console.error('Failed ');
return;
}
console.log('fileAsset.displayName ' + i + ': ' + fileAsset.displayName);
})
}
let fileKeyObj = mediaLibrary.FileKey;
let fileType = mediaLibrary.MediaType.IMAGE;
let option = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [fileType.toString()],
order: fileKeyObj.DATE_ADDED + " DESC",
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
fetchFileResult.getFirstObject().then((fileAsset) => {
console.info('getFirstObject.displayName : ' + fileAsset.displayName);
for (let i = 1; i < fetchFileResult.getCount(); i++) {
fetchFileResult.getNextObject().then((fileAsset) => {
console.info('fileAsset.displayName ' + i + ': ' + fileAsset.displayName);
}).catch((err) => {
console.error('Failed to get next object: ' + err);
});
}
}).catch((err) => {
console.error('Failed to get first object: ' + err);
});
}
```
......@@ -124,31 +133,29 @@ To specify the album name **'myAlbum'**, set **selectionArgs** to **'myAlbum'**.
```ts
async function example() {
let fileKeyObj = mediaLibrary.FileKey;
let fileType = mediaLibrary.MediaType.IMAGE;
let option = {
selections: fileKeyObj.ALBUM_NAME + '= ?',
selectionArgs: ['myAlbum'],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
for (let i = 0; i < fetchFileResult.getCount(); i++) {
fetchFileResult.getNextObject((err, fileAsset) => {
if (err) {
console.error('Failed ');
return;
}
console.log('fileAsset.displayName ' + i + ': ' + fileAsset.displayName);
})
}
let fileKeyObj = mediaLibrary.FileKey;
let option = {
selections: fileKeyObj.ALBUM_NAME + '= ?',
selectionArgs: ['myAlbum'],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
if (fetchFileResult.getCount() > 0) {
fetchFileResult.getFirstObject().then((album) => {
console.info('getFirstObject.displayName : ' + album.albumName);
}).catch((err) => {
console.error('Failed to get first object: ' + err);
});
} else {
console.info('getAlbum list is: 0');
}
}
```
## Obtaining Images and Videos in an Album
You can obtain media assets in an album in either of the following ways:
- Call [MediaLibrary.getFileAssets](../reference/apis/js-apis-medialibrary.md#getfileassets7-1) with an album specified, as described in [Querying Media Assets with the Specfied Album Name](#querying-media-assets-with-the-specified-album-name).
- Call [Album.getFileAssets](../reference/apis/js-apis-medialibrary.md#getfileassets7-3) to obtain an **Album** instance, so as to obtain the media assets in it.
......@@ -163,24 +170,24 @@ The following describes how to obtain videos in an album named **New Album 1**.
1. Create a retrieval condition for obtaining the target **Album** instance.
```ts
let fileKeyObj = mediaLibrary.FileKey;
let AlbumNoArgsFetchOp = {
selections: fileKeyObj.ALBUM_NAME + '= ?',
selectionArgs:['New Album 1']
}
```
```ts
let fileKeyObj = mediaLibrary.FileKey;
let AlbumNoArgsFetchOp = {
selections: fileKeyObj.ALBUM_NAME + '= ?',
selectionArgs:['New Album 1']
}
```
2. Create a retrieval condition for obtaining videos in the target album.
```ts
let fileKeyObj = mediaLibrary.FileKey;
let imageType = mediaLibrary.MediaType.VIDEO;
let imagesFetchOp = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [imageType.toString()],
}
```
```ts
let fileKeyObj = mediaLibrary.FileKey;
let videoType = mediaLibrary.MediaType.VIDEO;
let videoFetchOp = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [videoType.toString()],
}
```
3. Call **Album.getFileAssets** to obtain the videos in the target album.
......@@ -188,28 +195,28 @@ Complete sample code:
```ts
async function getCameraImagePromise() {
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let fileKeyObj = mediaLibrary.FileKey;
let imageType = mediaLibrary.MediaType.IMAGE;
let imagesFetchOp = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [imageType.toString()],
}
let AlbumNoArgsFetchOp = {
selections: fileKeyObj.ALBUM_NAME + '= ?',
selectionArgs:['New Album 1']
}
let albumList = await media.getAlbums(AlbumNoArgsFetchOp);
if (albumList.length > 0) {
const album = albumList[0];
let fetchFileResult = await album.getFileAssets(imagesFetchOp);
let count = fetchFileResult.getCount();
console.info("get mediaLibrary IMAGE number", count);
} else {
console.info('getAlbum list is: 0');
}
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let fileKeyObj = mediaLibrary.FileKey;
let videoType = mediaLibrary.MediaType.VIDEO;
let videoFetchOp = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [videoType.toString()],
}
let AlbumNoArgsFetchOp = {
selections: fileKeyObj.ALBUM_NAME + '= ?',
selectionArgs:['New Album 1']
}
let albumList = await media.getAlbums(AlbumNoArgsFetchOp);
if (albumList.length > 0) {
const album = albumList[0];
let fetchFileResult = await album.getFileAssets(videoFetchOp);
let count = fetchFileResult.getCount();
console.info("get mediaLibrary VIDEO number", count);
} else {
console.info('getAlbum list is: 0');
}
}
```
......@@ -235,31 +242,32 @@ The following describes how to obtain the thumbnail (size: 720 x 720) of the fir
```ts
async function getFirstThumbnailPromise() {
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let fileKeyObj = mediaLibrary.FileKey;
let imageType = mediaLibrary.MediaType.IMAGE;
let imagesFetchOp = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [imageType.toString()],
}
let size = { width: 720, height: 720 };
const fetchFileResult = await media.getFileAssets(imagesFetchOp);
if (fetchFileResult != undefined) {
const asset = await fetchFileResult.getFirstObject();
asset.getThumbnail(size).then((pixelMap) => {
pixelMap.getImageInfo().then((info) => {
console.info('get Thumbnail info: ' + "width: " + info.size.width + " height: " + info.size.height);
}).catch((err) => {
console.info("getImageInfo failed with error:" + err);
});
}).catch((err) => {
console.info("getImageInfo failed with error:" + err);
});
} else {
console.info("get image failed with error");
}
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
let fileKeyObj = mediaLibrary.FileKey;
let imageType = mediaLibrary.MediaType.IMAGE;
let imagesFetchOp = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [imageType.toString()],
}
let size = { width: 720, height: 720 };
const fetchFileResult = await media.getFileAssets(imagesFetchOp);
if (fetchFileResult === undefined) {
console.error("get image failed with error");
return;
} else {
const asset = await fetchFileResult.getFirstObject();
asset.getThumbnail(size).then((pixelMap) => {
pixelMap.getImageInfo().then((info) => {
console.info('get Thumbnail info: ' + "width: " + info.size.width + " height: " + info.size.height);
}).catch((err) => {
console.error("getImageInfo failed with error: " + err);
});
}).catch((err) => {
console.error("getImageInfo failed with error: " + err);
});
}
}
```
......@@ -277,16 +285,16 @@ The following describes how to create a file of the **MediaType.FILE** type.
```ts
async function example() {
let mediaType = mediaLibrary.MediaType.FILE;
let DIR_DOCUMENTS = mediaLibrary.DirectoryType.DIR_DOCUMENTS;
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const path = await media.getPublicDirectory(DIR_DOCUMENTS);
media.createAsset(mediaType, "testFile.text", path).then ((asset) => {
console.info("createAsset successfully:"+ JSON.stringify(asset));
}).catch((err) => {
console.info("createAsset failed with error:"+ err);
});
let mediaType = mediaLibrary.MediaType.FILE;
let DIR_DOCUMENTS = mediaLibrary.DirectoryType.DIR_DOCUMENTS;
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const path = await media.getPublicDirectory(DIR_DOCUMENTS);
media.createAsset(mediaType, "testFile.text", path).then((asset) => {
console.info("createAsset successfully:"+ JSON.stringify(asset));
}).catch((err) => {
console.error("createAsset failed with error: " + err);
});
}
```
......@@ -312,26 +320,26 @@ The following describes how to move the first file in the result set to the recy
```ts
async function example() {
let fileKeyObj = mediaLibrary.FileKey;
let fileType = mediaLibrary.MediaType.FILE;
let option = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [fileType.toString()],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
let asset = await fetchFileResult.getFirstObject();
if (asset == undefined) {
console.error('asset not exist');
return;
}
// Void callback.
asset.trash(true).then(() => {
console.info("trash successfully");
}).catch((err) => {
console.info("trash failed with error: " + err);
});
let fileKeyObj = mediaLibrary.FileKey;
let fileType = mediaLibrary.MediaType.FILE;
let option = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [fileType.toString()],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
let asset = await fetchFileResult.getFirstObject();
if (asset === undefined) {
console.error('asset not exist');
return;
}
// Void callback.
asset.trash(true).then(() => {
console.info("trash successfully");
}).catch((err) => {
console.error("trash failed with error: " + err);
});
}
```
......@@ -346,7 +354,7 @@ Before renaming a file, you must obtain the file, for example, by calling [Fetch
- You have obtained a **MediaLibrary** instance.
- You have granted the permission **ohos.permission.WRITE_MEDIA**.
The following describes how to rename the first file in the result set as **newtitle.text**.
The following describes how to rename the first file in the result set as **newImage.jpg**.
**How to Develop**
......@@ -358,28 +366,28 @@ The following describes how to rename the first file in the result set as **newt
```ts
async function example() {
let fileKeyObj = mediaLibrary.FileKey;
let fileType = mediaLibrary.MediaType.FILE;
let option = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [fileType.toString()],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
let asset = await fetchFileResult.getFirstObject();
if (asset == undefined) {
console.error('asset not exist');
let fileKeyObj = mediaLibrary.FileKey;
let fileType = mediaLibrary.MediaType.IMAGE;
let option = {
selections: fileKeyObj.MEDIA_TYPE + '= ?',
selectionArgs: [fileType.toString()],
};
const context = getContext(this);
let media = mediaLibrary.getMediaLibrary(context);
const fetchFileResult = await media.getFileAssets(option);
let asset = await fetchFileResult.getFirstObject();
if (asset === undefined) {
console.error('asset not exist');
return;
}
asset.displayName = 'newImage.jpg';
// Void callback.
asset.commitModify((err) => {
if (err) {
console.error('fileRename Failed ');
return;
}
asset.displayName = 'newImage.jpg';
// Void callback.
asset.commitModify((err) => {
if (err) {
console.error('fileRename Failed ');
return;
}
console.log('fileRename successful.');
});
console.info('fileRename successful.');
});
}
```
# Media
- Audio
- Audio and Video
- [Audio Overview](audio-overview.md)
- [Audio Playback Development](audio-playback.md)
- [Audio Recording Development](audio-recorder.md)
- [Audio Rendering Development](audio-renderer.md)
- [Audio Stream Management Development](audio-stream-manager.md)
- [Audio Capture Development](audio-capturer.md)
......@@ -12,10 +10,12 @@
- [Audio Interruption Mode Development](audio-interruptmode.md)
- [Volume Management Development](audio-volume-manager.md)
- [Audio Routing and Device Management Development](audio-routing-manager.md)
- Video
- [Video Playback Development](video-playback.md)
- [Video Recording Development](video-recorder.md)
- [AVPlayer Development (Recommended)](avplayer-playback.md)
- [AVRecorder Development (Recommended)](avrecorder.md)
- [Audio Playback Development (To Be Deprecated)](audio-playback.md)
- [Audio Recording Development (To Be Deprecated)](audio-recorder.md)
- [Video Playback Development (To Be Deprecated)](video-playback.md)
- [Video Recording Development (To Be Deprecated)](video-recorder.md)
- AVSession
- [AVSession Overview](avsession-overview.md)
......
......@@ -21,7 +21,7 @@ This following figure shows the audio capturer state transitions.
## Constraints
Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details about permission configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details, see [Permission Application Guide](../security/accesstoken-guidelines.md).
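A minimal **module.json5** sketch of the declaration might look as follows; the ability name and reason resource are placeholders to adapt to your project.

```json
{
  "module": {
    "requestPermissions": [
      {
        "name": "ohos.permission.MICROPHONE",
        "reason": "$string:reason",
        "usedScene": {
          "abilities": [
            "EntryAbility"
          ],
          "when": "inuse"
        }
      }
    ]
  }
}
```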
## How to Develop
......@@ -72,7 +72,7 @@ For details about the APIs, see [AudioCapturer in Audio Management](../reference
}
await audioCapturer.start();
let state = audioCapturer.state;
state = audioCapturer.state;
if (state == audio.AudioState.STATE_RUNNING) {
console.info('AudioRecLog: Capturer started');
} else {
......@@ -86,7 +86,7 @@ For details about the APIs, see [AudioCapturer in Audio Management](../reference
The following example shows how to write recorded data into a file.
```js
import fileio from '@ohos.fileio';
import fs from '@ohos.file.fs';
let state = audioCapturer.state;
// The read operation can be performed only when the state is STATE_RUNNING.
......@@ -96,31 +96,36 @@ For details about the APIs, see [AudioCapturer in Audio Management](../reference
}
const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the collected audio file.
let fd = fileio.openSync(path, 0o102, 0o777);
if (fd !== null) {
console.info('AudioRecLog: file fd created');
}
else{
console.info('AudioRecLog: file fd create : FAILED');
let file = fs.openSync(path, fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE);
let fd = file.fd;
if (file !== null) {
console.info('AudioRecLog: file created');
} else {
console.info('AudioRecLog: file create : FAILED');
return;
}
fd = fileio.openSync(path, 0o2002, 0o666);
if (fd !== null) {
console.info('AudioRecLog: file fd opened in append mode');
}
let numBuffersToCapture = 150; // Write data for 150 times.
let count = 0;
while (numBuffersToCapture) {
let bufferSize = await audioCapturer.getBufferSize();
let buffer = await audioCapturer.read(bufferSize, true);
let options = {
    offset: count * bufferSize,
    length: bufferSize
}
if (buffer === undefined) {
console.info('AudioRecLog: read buffer failed');
} else {
let number = fileio.writeSync(fd, buffer);
let number = fs.writeSync(fd, buffer, options);
console.info(`AudioRecLog: data written: ${number}`);
}
}
numBuffersToCapture--;
count++;
}
```
......@@ -178,18 +183,18 @@ For details about the APIs, see [AudioCapturer in Audio Management](../reference
// Obtain the audio capturer information.
let audioCapturerInfo : audio.AudioCapturerInfo = await audioCapturer.getCapturerInfo();
// Obtain the audio stream information.
let audioStreamInfo : audio.AudioStreamInfo = await audioCapturer.getStreamInfo();
// Obtain the audio stream ID.
let audioStreamId : number = await audioCapturer.getAudioStreamId();
// Obtain the Unix timestamp, in nanoseconds.
let audioTime : number = await audioCapturer.getAudioTime();
// Obtain a proper minimum buffer size.
let bufferSize : number = await audioCapturer.getBuffersize();
let bufferSize : number = await audioCapturer.getBufferSize();
```
7. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.
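A minimal sketch of this step is shown below; the frame count **1000** is an arbitrary illustrative value.

```js
// Subscribe to the mark reached event: the callback fires when the capture frame count reaches 1000.
audioCapturer.on('markReach', 1000, (position) => {
    if (position == 1000) {
        console.info('ON Triggered successfully');
    }
});

// Unsubscribe when the notification is no longer needed.
audioCapturer.off('markReach');
```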
......
......@@ -38,7 +38,7 @@ For details about the **src** types supported by **AudioPlayer**, see the [src a
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
// Print the stream track information.
function printfDescription(obj) {
......@@ -112,14 +112,8 @@ async function audioPlayerDemo() {
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
}, (err) => {
console.info('open fd failed err is' + err);
}).catch((err) => {
console.info('open fd failed err is' + err);
});
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback.
}
```
......@@ -128,7 +122,7 @@ async function audioPlayerDemo() {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
export class AudioDemo {
// Set the player callbacks.
......@@ -154,14 +148,8 @@ export class AudioDemo {
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
}, (err) => {
console.info('open fd failed err is' + err);
}).catch((err) => {
console.info('open fd failed err is' + err);
});
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback.
}
}
......@@ -171,7 +159,7 @@ export class AudioDemo {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
export class AudioDemo {
// Set the player callbacks.
......@@ -202,14 +190,8 @@ export class AudioDemo {
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let nextpath = pathDir + '/02.mp3'
await fileIO.open(nextpath).then((fdNumber) => {
nextFdPath = nextFdPath + '' + fdNumber;
console.info('open fd success fd is' + nextFdPath);
}, (err) => {
console.info('open fd failed err is' + err);
}).catch((err) => {
console.info('open fd failed err is' + err);
});
let nextFile = await fs.open(nextpath);
nextFdPath = nextFdPath + '' + nextFile.fd;
audioPlayer.src = nextFdPath; // Set the src attribute and trigger the 'dataLoad' event callback.
}
......@@ -220,14 +202,8 @@ export class AudioDemo {
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
}, (err) => {
console.info('open fd failed err is' + err);
}).catch((err) => {
console.info('open fd failed err is' + err);
});
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback.
}
}
......@@ -237,7 +213,7 @@ export class AudioDemo {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
export class AudioDemo {
// Set the player callbacks.
......@@ -259,14 +235,8 @@ export class AudioDemo {
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
}, (err) => {
console.info('open fd failed err is' + err);
}).catch((err) => {
console.info('open fd failed err is' + err);
});
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback.
}
}
......
......@@ -33,33 +33,32 @@ The following figure shows the audio renderer state transitions.
For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.
Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.
Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.
```js
import audio from '@ohos.multimedia.audio';
import audio from '@ohos.multimedia.audio';
let audioStreamInfo = {
samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
channels: audio.AudioChannel.CHANNEL_1,
sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
let audioRendererInfo = {
content: audio.ContentType.CONTENT_TYPE_SPEECH,
usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0.
let audioStreamInfo = {
samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
channels: audio.AudioChannel.CHANNEL_1,
sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
let audioRendererInfo = {
content: audio.ContentType.CONTENT_TYPE_SPEECH,
usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0.
}
let audioRendererOptions = {
streamInfo: audioStreamInfo,
rendererInfo: audioRendererInfo
}
let audioRendererOptions = {
streamInfo: audioStreamInfo,
rendererInfo: audioRendererInfo
}
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
console.log("Create audio renderer success.");
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
console.log("Create audio renderer success.");
```
2. Use **start()** to start audio rendering.
```js
......@@ -82,15 +81,15 @@ Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. Th
}
}
```
The renderer state will be **STATE_RUNNING** once the audio renderer is started. The application can then begin reading buffers.
The renderer state will be **STATE_RUNNING** once the audio renderer is started. The application can then begin reading buffers.
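   A quick way to confirm this without polling **audioRenderer.state** is to subscribe to state changes. A minimal sketch, assuming the **audioRenderer** instance created in step 1:

   ```js
   // Sketch only: log when the renderer enters the running state.
   audioRenderer.on('stateChange', (state) => {
     if (state == audio.AudioState.STATE_RUNNING) {
       console.info('Renderer is running; the application can start writing buffers.');
     }
   });
   ```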
3. Call **write()** to write data to the buffer.
Read the audio data to be played to the buffer. Call **write()** repeatedly to write the data to the buffer.
```js
import fileio from '@ohos.fileio';
import fs from '@ohos.file.fs';
import audio from '@ohos.multimedia.audio';
async function writeBuffer(buf) {
......@@ -109,35 +108,33 @@ Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. Th
// Set a proper buffer size for the audio renderer. You can also select a buffer of another size.
const bufferSize = await audioRenderer.getBufferSize();
let dir = globalThis.fileDir; // You must use the sandbox path.
const path = dir + '/file_example_WAV_2MG.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/file_example_WAV_2MG.wav
console.info(`file path: ${ path}`);
let ss = fileio.createStreamSync(path, 'r');
const totalSize = fileio.statSync(path).size; // Size of the file to render.
let discardHeader = new ArrayBuffer(bufferSize);
ss.readSync(discardHeader);
let rlen = 0;
rlen += bufferSize;
let id = setInterval(() => {
if (audioRenderer.state == audio.AudioState.STATE_RELEASED) { // The rendering stops if the audio renderer is in the STATE_RELEASED state.
ss.closeSync();
await audioRenderer.stop();
clearInterval(id);
const filePath = dir + '/file_example_WAV_2MG.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/file_example_WAV_2MG.wav
console.info(`file path: ${filePath}`);
let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
let stat = await fs.stat(filePath); // Music file information.
let buf = new ArrayBuffer(bufferSize);
let len = stat.size % bufferSize == 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
for (let i = 0;i < len; i++) {
let options = {
offset: i * bufferSize,
length: bufferSize
}
if (audioRenderer.state == audio.AudioState.STATE_RUNNING) {
if (rlen >= totalSize) { // The rendering stops if the file finishes reading.
ss.closeSync();
await audioRenderer.stop();
clearInterval(id);
}
let buf = new ArrayBuffer(bufferSize);
rlen += ss.readSync(buf);
console.info(`Total bytes read from file: ${rlen}`);
writeBuffer(buf);
} else {
console.info('check after next interval');
}
}, 30); // The timer interval is set based on the audio format. The unit is millisecond.
let readsize = await fs.read(file.fd, buf, options)
let writeSize = await new Promise((resolve,reject)=>{
audioRenderer.write(buf,(err,writeSize)=>{
if(err){
reject(err)
}else{
resolve(writeSize)
}
})
})
}
fs.close(file)
await audioRenderer.stop(); // Stop rendering.
await audioRenderer.release(); // Releases the resources.
```
4. (Optional) Call **pause()** or **stop()** to pause or stop rendering.
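   A minimal sketch, assuming the **audioRenderer** instance created in step 1 is currently rendering:

   ```js
   // Sketch only: control the rendering state with the promise-based APIs.
   async function pauseAndStop() {
     await audioRenderer.pause();  // The state changes to STATE_PAUSED.
     await audioRenderer.start();  // Resume rendering from the paused state.
     await audioRenderer.stop();   // The state changes to STATE_STOPPED.
   }
   ```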
......@@ -242,7 +239,7 @@ Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. Th
let audioTime : number = await audioRenderer.getAudioTime();
// Obtain a proper minimum buffer size.
let bufferSize : number = await audioRenderer.getBuffersize();
let bufferSize : number = await audioRenderer.getBufferSize();
// Obtain the audio renderer rate.
let renderRate : audio.AudioRendererRate = await audioRenderer.getRenderRate();
......@@ -424,35 +421,31 @@ Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. Th
let dir = globalThis.fileDir; // You must use the sandbox path.
const path1 = dir + '/music001_48000_32_1.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/music001_48000_32_1.wav
console.info(`audioRender1 file path: ${ path1}`);
let ss1 = await fileio.createStream(path1,'r');
const totalSize1 = fileio.statSync(path1).size; // Size of the file to render.
console.info(`totalSize1 -------: ${totalSize1}`);
let discardHeader = new ArrayBuffer(bufferSize);
ss1.readSync(discardHeader);
let rlen = 0;
rlen += bufferSize;
let file1 = fs.openSync(path1, fs.OpenMode.READ_ONLY);
let stat = await fs.stat(path1); // Music file information.
let buf = new ArrayBuffer(bufferSize);
let len = stat.size % bufferSize == 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
// 1.7 Render the original audio data in the buffer by using audioRender.
let id = setInterval(async () => {
if (audioRenderer1.state == audio.AudioState.STATE_RELEASED) { // The rendering stops if the audio renderer is in the STATE_RELEASED state.
ss1.closeSync();
audioRenderer1.stop();
clearInterval(id);
for (let i = 0;i < len; i++) {
let options = {
offset: i * bufferSize,
length: bufferSize
}
if (audioRenderer1.state == audio.AudioState.STATE_RUNNING) {
if (rlen >= totalSize1) { // The rendering stops if the file finishes reading.
ss1.closeSync();
await audioRenderer1.stop();
clearInterval(id);
}
let buf = new ArrayBuffer(bufferSize);
rlen += ss1.readSync(buf);
console.info(`Total bytes read from file: ${rlen}`);
await writeBuffer(buf, that.audioRenderer1);
} else {
console.info('check after next interval');
}
}, 30); // The timer interval is set based on the audio format. The unit is millisecond.
let readsize = await fs.read(file1.fd, buf, options)
let writeSize = await new Promise((resolve,reject)=>{
this.audioRenderer1.write(buf,(err,writeSize)=>{
if(err){
reject(err)
}else{
resolve(writeSize)
}
})
})
}
fs.close(file1)
await audioRenderer1.stop(); // Stop rendering.
await audioRenderer1.release(); // Releases the resources.
}
async runningAudioRender2(){
......@@ -499,36 +492,32 @@ Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. Th
// 2.6 Read the original audio data file.
let dir = globalThis.fileDir; // You must use the sandbox path.
const path2 = dir + '/music002_48000_32_1.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/music002_48000_32_1.wav
console.error(`audioRender1 file path: ${ path2}`);
let ss2 = await fileio.createStream(path2,'r');
const totalSize2 = fileio.statSync(path2).size; // Size of the file to render.
console.error(`totalSize2 -------: ${totalSize2}`);
let discardHeader2 = new ArrayBuffer(bufferSize);
ss2.readSync(discardHeader2);
let rlen = 0;
rlen += bufferSize;
console.info(`audioRender2 file path: ${ path2}`);
let file2 = fs.openSync(path2, fs.OpenMode.READ_ONLY);
let stat = await fs.stat(path2); // Music file information.
let buf = new ArrayBuffer(bufferSize);
let len = stat.size % bufferSize == 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
// 2.7 Render the original audio data in the buffer by using audioRender.
let id = setInterval(async () => {
if (audioRenderer2.state == audio.AudioState.STATE_RELEASED) { // The rendering stops if the audio renderer is in the STATE_RELEASED state.
ss2.closeSync();
that.audioRenderer2.stop();
clearInterval(id);
for (let i = 0;i < len; i++) {
let options = {
offset: i * bufferSize,
length: bufferSize
}
if (audioRenderer1.state == audio.AudioState.STATE_RUNNING) {
if (rlen >= totalSize2) { // The rendering stops if the file finishes reading.
ss2.closeSync();
await audioRenderer2.stop();
clearInterval(id);
}
let buf = new ArrayBuffer(bufferSize);
rlen += ss2.readSync(buf);
console.info(`Total bytes read from file: ${rlen}`);
await writeBuffer(buf, that.audioRenderer2);
} else {
console.info('check after next interval');
}
}, 30); // The timer interval is set based on the audio format. The unit is millisecond.
let readsize = await fs.read(file2.fd, buf, options)
let writeSize = await new Promise((resolve,reject)=>{
this.audioRenderer2.write(buf,(err,writeSize)=>{
if(err){
reject(err)
}else{
resolve(writeSize)
}
})
})
}
fs.close(file2)
await audioRenderer2.stop(); // Stop rendering.
await audioRenderer2.release(); // Releases the resources.
}
async writeBuffer(buf, audioRender) {
......@@ -548,4 +537,4 @@ Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. Th
await runningAudioRender2();
}
```
```
\ No newline at end of file
......@@ -104,7 +104,7 @@ The full playback process includes creating an instance, setting resources, sett
```js
import media from '@ohos.multimedia.media'
import audio from '@ohos.multimedia.audio';
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
const TAG = 'AVPlayerDemo:'
export class AVPlayerDemo {
......@@ -223,14 +223,8 @@ export class AVPlayerDemo {
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el2/100/base/ohos.acts.multimedia.media.avplayer/haps/entry/files" command.
let path = pathDir + '/H264_AAC.mp4'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber
console.info('open fd success fd is' + fdPath)
}, (err) => {
console.info('open fd failed err is' + err)
}).catch((err) => {
console.info('open fd failed err is' + err)
});
let file = await fs.open(path)
fdPath = fdPath + '' + file.fd
this.avPlayer.url = fdPath
}
}
......@@ -240,7 +234,7 @@ export class AVPlayerDemo {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
const TAG = 'AVPlayerDemo:'
export class AVPlayerDemo {
......@@ -280,7 +274,7 @@ export class AVPlayerDemo {
break;
case 'stopped': // This state is reported upon a successful callback of stop().
console.info(TAG + 'state stopped called')
this.avPlayer.reset() // Call reset() to initialize the AVPlayer state.
this.avPlayer.release() // Call release() to release the AVPlayer instance.
break;
case 'released':
console.info(TAG + 'state released called')
......@@ -302,24 +296,18 @@ export class AVPlayerDemo {
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el2/100/base/ohos.acts.multimedia.media.avplayer/haps/entry/files" command.
let path = pathDir + '/H264_AAC.mp4'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber
console.info('open fd success fd is' + fdPath)
}, (err) => {
console.info('open fd failed err is' + err)
}).catch((err) => {
console.info('open fd failed err is' + err)
});
let file = await fs.open(path)
fdPath = fdPath + '' + file.fd
this.avPlayer.url = fdPath
}
}
```
### Switching to the Next Video Clip
### Looping a Song
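The key difference from the preceding scenario is the **loop** attribute of the AVPlayer. A minimal sketch, assuming an **avPlayer** instance that has reached the prepared state:

```js
// Sketch only: enable loop playback before starting playback.
async function startLoopPlayback(avPlayer) {
  avPlayer.loop = true;   // Replay the same resource automatically when playback completes.
  await avPlayer.play();  // Start playback; the 'playing' state is reported.
}
```

The full example below shows where this fits into the state callback flow.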
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
const TAG = 'AVPlayerDemo:'
export class AVPlayerDemo {
......@@ -362,7 +350,7 @@ export class AVPlayerDemo {
break;
case 'stopped': // This state is reported upon a successful callback of stop().
console.info(TAG + 'state stopped called')
this.avPlayer.reset() // Call reset() to initialize the AVPlayer state.
this.avPlayer.release() // Call release() to release the AVPlayer instance.
break;
case 'released':
console.info(TAG + 'state released called')
......@@ -393,23 +381,17 @@ export class AVPlayerDemo {
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el2/100/base/ohos.acts.multimedia.media.avplayer/haps/entry/files" command.
let path = pathDir + '/H264_AAC.mp4'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber
console.info('open fd success fd is' + fdPath)
}, (err) => {
console.info('open fd failed err is' + err)
}).catch((err) => {
console.info('open fd failed err is' + err)
});
let file = await fs.open(path)
fdPath = fdPath + '' + file.fd
this.avPlayer.url = fdPath
}
}
```
### Looping a Song
### Switching to the Next Video Clip
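The switch relies on calling **reset()** to return the AVPlayer to the idle state and then assigning a new **url**. A minimal sketch, assuming the FD path of the next clip has already been prepared as shown in the example below:

```js
// Sketch only: switch resources on an existing AVPlayer instance.
async function switchToNext(avPlayer, nextFdPath) {
  await avPlayer.reset();     // Return to the idle state and clear the current resource.
  avPlayer.url = nextFdPath;  // Setting url again triggers the initialized state callback.
}
```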
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
const TAG = 'AVPlayerDemo:'
export class AVPlayerDemo {
......@@ -422,14 +404,8 @@ export class AVPlayerDemo {
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_MP3.mp4 /data/app/el2/100/base/ohos.acts.multimedia.media.avplayer/haps/entry/files" command.
let path = pathDir + '/H264_MP3.mp4'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber
console.info('open fd success fd is' + fdPath)
}, (err) => {
console.info('open fd failed err is' + err)
}).catch((err) => {
console.info('open fd failed err is' + err)
});
let file = await fs.open(path)
fdPath = fdPath + '' + file.fd
this.avPlayer.url = fdPath // The initialized state is reported again.
}
......@@ -493,14 +469,8 @@ export class AVPlayerDemo {
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el2/100/base/ohos.acts.multimedia.media.avplayer/haps/entry/files" command.
let path = pathDir + '/H264_AAC.mp4'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber
console.info('open fd success fd is' + fdPath)
}, (err) => {
console.info('open fd failed err is' + err)
}).catch((err) => {
console.info('open fd failed err is' + err)
});
let file = await fs.open(path)
fdPath = fdPath + '' + file.fd
this.avPlayer.url = fdPath
}
}
......
......@@ -67,14 +67,14 @@ export class AVRecorderDemo {
let surfaceID; // The surface ID is obtained by calling getInputSurface and transferred to the videoOutput object of the camera.
await this.getFd('01.mp4');
// Configure the parameters related to audio and video recording.
// Configure the parameters related to audio and video recording based on those supported by the hardware device.
let avProfile = {
audioBitrate : 48000,
audioChannels : 2,
audioCodec : media.CodecMimeType.AUDIO_AAC,
audioSampleRate : 48000,
fileFormat : media.ContainerFormatType.CFT_MPEG_4,
videoBitrate : 48000,
videoBitrate : 2000000,
videoCodec : media.CodecMimeType.VIDEO_MPEG4,
videoFrameWidth : 640,
videoFrameHeight : 480,
......@@ -363,10 +363,10 @@ export class VideoRecorderDemo {
let surfaceID; // The surface ID is obtained by calling getInputSurface and transferred to the videoOutput object of the camera.
await this.getFd('01.mp4');
// Configure the parameters related to video recording.
// Configure the parameters related to pure video recording based on those supported by the hardware device.
let videoProfile = {
fileFormat : media.ContainerFormatType.CFT_MPEG_4,
videoBitrate : 48000,
videoBitrate : 2000000,
videoCodec : media.CodecMimeType.VIDEO_MPEG4,
videoFrameWidth : 640,
videoFrameHeight : 480,
......
......@@ -51,7 +51,7 @@ For details about how to create an XComponent, see [XComponent](../reference/ark
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
export class VideoPlayerDemo {
// Report an error in the case of a function invocation failure.
failureCallback(error) {
......@@ -82,14 +82,8 @@ export class VideoPlayerDemo {
let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile" command.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile/H264_AAC.mp4';
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
}, (err) => {
console.info('open fd failed err is' + err);
}).catch((err) => {
console.info('open fd failed err is' + err);
});
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
// Call createVideoPlayer to create a VideoPlayer instance.
await media.createVideoPlayer().then((video) => {
if (typeof (video) != 'undefined') {
......@@ -180,7 +174,7 @@ export class VideoPlayerDemo {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
export class VideoPlayerDemo {
// Report an error in the case of a function invocation failure.
failureCallback(error) {
......@@ -211,14 +205,8 @@ export class VideoPlayerDemo {
let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile" command.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile/H264_AAC.mp4';
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
}, (err) => {
console.info('open fd failed err is' + err);
}).catch((err) => {
console.info('open fd failed err is' + err);
});
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
// Call createVideoPlayer to create a VideoPlayer instance.
await media.createVideoPlayer().then((video) => {
if (typeof (video) != 'undefined') {
......@@ -267,7 +255,7 @@ export class VideoPlayerDemo {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
export class VideoPlayerDemo {
// Report an error in the case of a function invocation failure.
failureCallback(error) {
......@@ -299,14 +287,8 @@ export class VideoPlayerDemo {
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile" command.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile/H264_AAC.mp4';
let nextPath = '/data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile/MP4_AAC.mp4';
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
}, (err) => {
console.info('open fd failed err is' + err);
}).catch((err) => {
console.info('open fd failed err is' + err);
});
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
// Call createVideoPlayer to create a VideoPlayer instance.
await media.createVideoPlayer().then((video) => {
if (typeof (video) != 'undefined') {
......@@ -341,14 +323,8 @@ export class VideoPlayerDemo {
// Obtain the next video FD address.
fdPath = 'fd://'
await fileIO.open(nextPath).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
}, (err) => {
console.info('open fd failed err is' + err);
}).catch((err) => {
console.info('open fd failed err is' + err);
});
let nextFile = await fs.open(nextPath);
fdPath = fdPath + '' + nextFile.fd;
// Set the second video playback source.
videoPlayer.url = fdPath;
......@@ -378,7 +354,7 @@ export class VideoPlayerDemo {
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
import fs from '@ohos.file.fs'
export class VideoPlayerDemo {
// Report an error in the case of a function invocation failure.
failureCallback(error) {
......@@ -409,14 +385,8 @@ export class VideoPlayerDemo {
let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile" command.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile/H264_AAC.mp4';
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
}, (err) => {
console.info('open fd failed err is' + err);
}).catch((err) => {
console.info('open fd failed err is' + err);
});
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
// Call createVideoPlayer to create a VideoPlayer instance.
await media.createVideoPlayer().then((video) => {
if (typeof (video) != 'undefined') {
......
......@@ -76,14 +76,14 @@ export class VideoRecorderDemo {
let surfaceID = null; // Used to save the surface ID returned by getInputSurface.
// Obtain the FD address of the video to be recorded.
await this.getFd('01.mp4');
// Recording-related parameter settings
// Configure the parameters related to video recording based on those supported by the hardware device.
let videoProfile = {
audioBitrate : 48000,
audioChannels : 2,
audioCodec : 'audio/mp4a-latm',
audioSampleRate : 48000,
fileFormat : 'mp4',
videoBitrate : 48000,
videoBitrate : 2000000,
videoCodec : 'video/mp4v-es',
videoFrameWidth : 640,
videoFrameHeight : 480,
......
......@@ -4,9 +4,7 @@ OpenHarmony applications use JavaScript (JS) when calling native APIs. The nativ
## How to Develop
The DevEco Studio has a default project that uses NAPIs.
You can choose **File** > **New** > **Create Project** to create a **Native C++** project. The **cpp** directory is generated in the **main** directory. You can use the NAPIs provided by the **ace_napi** repository for development.
DevEco Studio provides a default project that uses NAPIs. You can choose **File** > **New** > **Create Project** to create a **Native C++** project. The **cpp** directory is generated in the **main** directory. You can use the NAPIs provided by the **ace_napi** repository for development.
You can import the native .so file that contains the JS processing logic. For example, use **import hello from 'libhello.so'** to access the capabilities of **libhello.so**. Then, the JS object created using the NAPI can be passed to the **hello** object of the application to call the native capability.
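A minimal usage sketch on the JS side (the **add** method is a hypothetical export registered by the native module, used here only for illustration):

```js
// Sketch only: call a native method exposed through the NAPI.
import hello from 'libhello.so';

let result = hello.add(2, 3); // add() is assumed to be registered by libhello.so.
console.info('native add result: ' + result);
```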
......@@ -19,7 +17,10 @@ You can import the native .so that contains the JS processing logic. For example
### .so Naming Rules
Each module has a .so file. For example, if the module name is **hello**, name the .so file **libhello.so**. The **nm_modname** field in **napi_module** must be **hello**, which is the same as the module name. The sample code for importing the .so file is **import hello from 'libhello.so'**.
The .so file names must comply with the following rules:
* Each module has a .so file.
* The **nm_modname** field in **napi_module** must be the same as the module name. For example, if the module name is **hello**, name the .so file **libhello.so**. The sample code for importing the .so file is **import hello from 'libhello.so'**.
### JS Objects and Threads
......
......@@ -3,25 +3,27 @@
DevEco Studio allows you to develop and build multiple HAP files in one application project, as shown below.
**Figure 1** Multi-HAP build view
**Figure 1** Multi-HAP build view
![hap-multi-view](figures/hap-multi-view.png)
1. Development view in DevEco Studio
- AppScope folder
- [app.json5](app-configuration-file.md): application-wide configuration, such as the application bundle name, version number, application icon, application name, and dependent SDK version number.
- **AppScope** folder
- **[app.json5](app-configuration-file.md)**: stores application-wide configuration, such as the application bundle name, version number, application icon, application name, and dependent SDK version number.
- **resources** folder: stores application icon resources and application name string resources.
**NOTE**
- The folder is automatically generated by DevEco Studio and its name cannot be changed.
- The file names in the **AppScope** folder cannot be the same as those in the entry- or feature-type module directories. Otherwise, DevEco Studio reports an error.
- Entry- or feature-type module directories (the names are customizable)
- You implement service logic of your application in these module directories. In this example, the module folders are **entry.hap** and **feature.hap**.
- **resources** directory: stores the resources used by the module.
**NOTE**
- The folder is automatically generated by DevEco Studio and its name cannot be changed.
- The file names in the **AppScope** folder cannot be the same as those in the entry- or feature-type module folder. Otherwise, an error will be reported.
- **entry** or **feature** folder (whose name is customizable)
- A module folder created by the developer by following the creation wizard of DevEco Studio. It stores the service logic implementation of the application. Multiple module folders can be created. In the preceding figure, **entry** and **feature** are two created module folders.
- **resources** folder: stores the resources used by the module.
- **ets** folder: stores the service logic.
- [module.json5](module-configuration-file.md): module configuration, such as the module name, entry code path of the module, and component information.
- **[module.json5](module-configuration-file.md)**: stores module configuration, such as the module name, entry code path of the module, and component information.
2. View after build and packaging
- After a module is built, a HAP file for deployment is generated. Each module corresponds to a HAP file.
- The **module.json** file in the HAP file is composed of the **app.json5** and **module.json5** files in the development view.
......
......@@ -6,32 +6,40 @@ Below is the process of developing, debugging, releasing, and deploying multiple
![hap-release](figures/hap-release.png)
## Development
You can use [DevEco Studio](https://developer.harmonyos.com/en/develop/deveco-studio) to create multiple modules based on service requirements and develop services in independent modules.
You can use [DevEco Studio](https://developer.harmonyos.com/en/develop/deveco-studio) to create multiple modules as needed and develop services in respective modules.
## Debugging
You can use DevEco Studio to build code into one or more HAP files. Then, you can debug the HAP files.
After building your code into one or more HAP files and installing or updating them, you can debug the HAP files by using the following methods:
* Using DevEco Studio for debugging
Follow the instructions in [Debugging Configuration](https://developer.harmonyos.com/en/docs/documentation/doc-guides/ohos-debugging-and-running-0000001263040487#section10491183521520).
* Using [hdc_std](../../device-dev/subsystems/subsys-toolchain-hdc-guide.md) for debugging
* Using [hdc](../../device-dev/subsystems/subsys-toolchain-hdc-guide.md) (which can be obtained in the **toolchains** directory of the OpenHarmony SDK) for debugging
Before debugging HAP files, install or update them using either of the methods:
1. Use hdc to install and update the HAP files.
When specifying the HAP files, use the paths of the files on the operating system, for example, Windows.
You can obtain the hdc_std tool from the **toolchains** directory of the SDK. When using this tool to install an HAP file, the HAP file path is the one on the operating platform. In this example, the Windows operating platform is used. The command reference is as follows:
```
// Installation and update: Multiple file paths can be specified.
hdc_std install C:\entry.hap C:\feature.hap
hdc install C:\entry.hap C:\feature.hap
// The execution result is as follows:
install bundle successfully.
// Uninstall
hdc_std uninstall com.example.myapplication
hdc uninstall com.example.myapplication
// The execution result is as follows:
uninstall bundle successfully.
```
2. Run the hdc shell command, and then use the Bundle Manager (bm) tool to install and update the HAP files.
* Using [Bundle Manager (bm)](../../application-dev/tools/bm-tool.md) for debugging
When using bm to install or update an HAP file, the HAP file path is the one on the real device. The command reference is as follows:
When specifying the HAP files, use the paths of the files on the real device. The sample code is as follows:
```
// Run the hdc shell command before using the bm tool.
hdc shell
// Installation and update: Multiple file paths can be specified.
bm install -p /data/app/entry.hap /data/app/feature.hap
// The execution result is as follows:
......@@ -41,6 +49,8 @@ You can use DevEco Studio to build code into one or more HAP files. Then, you ca
// The execution result is as follows:
uninstall bundle successfully.
```
After the HAP files are installed or updated, you can debug them by following the instructions in [Ability Assistant](https://docs.openharmony.cn/pages/v3.2Beta/en/application-dev/tools/aa-tool.md/).
## Release
When your application package meets the release requirements, you can package and build it into an App Pack and release it to the application market on the cloud. The application market verifies the signature of the App Pack. If the signature verification is successful, the application market obtains the HAP files from the App Pack, signs them, and distributes the signed HAP files.
......
# Multi-HAP Usage Rules
- The App Pack cannot be directly installed on the device. It is only a unit that is released to AppGallery.
- The App Pack cannot be directly installed on a device. It is used only for release to the application market.
- All HAP files in the App Pack must share the same **bundleName** value in the configuration files.
- All HAP files in the App Pack must share the same **versionCode** value in the configuration files.
- In an application, each type of device supports only one HAP of the entry type. Each application can contain zero, one, or more HAP files of the feature type.
- In an App Pack, each type of device supports only one HAP file of the entry type and zero, one, or more HAP files of the feature type.
- Each HAP file in the App Pack must have **moduleName** configured. The **moduleName** value corresponding to all HAP files of the same device type must be unique.
- Each HAP file in the App Pack must have **moduleName** configured. Among HAP files of the same device type, the **moduleName** value must be unique.
- The signing certificates of all HAP files in the same application must be the same. Applications are released to the application market in the form of App Pack after being signed. Before distribution, the application market splits an App Pack into HAP files and resigns them to ensure the consistency of all HAP file signing certificates. Before installing HAP files on a device through the CLI or DevEco Studio for debugging, you must ensure that their signing certificates are the same. Otherwise, the installation will fail.
- The signing certificates of all HAP files in the same application must be the same. Applications are released to the application market in the form of App Pack after being signed. Before distribution, the application market splits an App Pack into HAP files and resigns them to ensure the consistency of HAP file signing certificates. Before installing HAP files on a device through the CLI or DevEco Studio for debugging, ensure that their signing certificates are the same. Otherwise, the installation will fail.
......@@ -35,7 +35,7 @@ resources
| Category | base Subdirectory | Qualifiers Subdirectory | rawfile Subdirectory |
| ---- | ---------------------------------------- | ---------------------------------------- | ---------------------------------------- |
| Structure| The **base** subdirectory is a default directory. If no qualifiers subdirectories in the **resources** directory of the application match the device status, the resource file in the **base** subdirectory will be automatically referenced.<br>Resource group subdirectories are located at the second level of subdirectories to store basic elements such as strings, colors, and boolean values, as well as resource files such as media, animations, and layouts. For details, see [Resource Group Subdirectories](#resource-group-subdirectories).| You need to create qualifiers subdirectories on your own. Each directory name consists of one or more qualifiers that represent the application scenarios or device characteristics. For details, see [Qualifiers Subdirectories](#qualifiers-subdirectories).<br>Resource group subdirectories are located at the second level of subdirectories to store basic elements such as strings, colors, and boolean values, as well as resource files such as media, animations, and layouts. For details, see [Resource Group Subdirectories](#resource-group-subdirectories). | You can create multiple levels of subdirectories with custom directory names. They can be used to store various resource files.<br>However, resource files in the **rawfile** subdirectory will not be matched based on the device status.|
| Structure| The **base** subdirectory is a default directory. If no qualifiers subdirectories in the **resources** directory of the application match the device status, the resource file in the **base** subdirectory will be automatically referenced.<br>Resource group subdirectories are located at the second level of subdirectories to store basic elements such as strings, colors, and boolean values, as well as resource files such as media, animations, and layouts. For details, see [Resource Group Subdirectories](#resource-group-subdirectories).| You need to create qualifiers subdirectories on your own. Each directory name consists of one or more qualifiers that represent the application scenarios or device characteristics. For details, see [Qualifiers Subdirectories](#qualifiers-subdirectories).<br>Resource group subdirectories are located at the second level of subdirectories to store basic elements such as strings, colors, and boolean values, as well as resource files such as media, animations, and layouts. For details, see [Resource Group Subdirectories](#resource-group-subdirectories).| You can create multiple levels of subdirectories with custom directory names. They can be used to store various resource files.<br>However, resource files in the **rawfile** subdirectory will not be matched based on the device status.|
| Compilation| Resource files in the subdirectory are compiled into binary files, and each resource file is assigned an ID. | Resource files in the subdirectory are compiled into binary files, and each resource file is assigned an ID. | Resource files in the subdirectory are directly packed into the application without being compiled, and no IDs will be assigned to the resource files. |
| Reference| Resource files in the subdirectory are referenced based on the resource type and resource name. | Resource files in the subdirectory are referenced based on the resource type and resource name. | Resource files in the subdirectory are referenced based on the file path and file name. |
......@@ -81,9 +81,9 @@ You can create resource group subdirectories (including element, media, and prof
| Resource Group Subdirectory | Description | Resource File |
| ------- | ---------------------------------------- | ---------------------------------------- |
| element | Indicates element resources. Each type of data is represented by a JSON file. The options are as follows:<br>- **boolean**: boolean data<br>- **color**: color data<br>- **float**: floating-point data<br>- **intarray**: array of integers<br>- **integer**: integer data<br>- **pattern**: pattern data<br>- **plural**: plural form data<br>- **strarray**: array of strings<br>- **string**: string data| It is recommended that files in the **element** subdirectory be named the same as the following files, each of which can contain only data of the same type:<br>- boolean.json<br>- color.json<br>- float.json<br>- intarray.json<br>- integer.json<br>- pattern.json<br>- plural.json<br>- strarray.json<br>- string.json |
| media | Indicates media resources, including non-text files such as images, audios, and videos. | The file name can be customized, for example, **icon.png**. |
| profile | Indicates a user-defined configuration file. You can obtain the file content by using the [getProfileByAbility](../reference/apis/js-apis-bundleManager.md#bundlemanagergetprofilebyability) API. | The file name can be customized, for example, **test_profile.json**. |
| element | Indicates element resources. Each type of data is represented by a JSON file. (Only files are supported in this directory.) The options are as follows:<br>- **boolean**: boolean data<br>- **color**: color data<br>- **float**: floating-point data<br>- **intarray**: array of integers<br>- **integer**: integer data<br>- **pattern**: pattern data<br>- **plural**: plural form data<br>- **strarray**: array of strings<br>- **string**: string data| It is recommended that files in the **element** subdirectory be named the same as the following files, each of which can contain only data of the same type:<br>- boolean.json<br>- color.json<br>- float.json<br>- intarray.json<br>- integer.json<br>- pattern.json<br>- plural.json<br>- strarray.json<br>- string.json |
| media | Indicates media resources, including non-text files such as images, audios, and videos. (Only files are supported in this directory.) | The file name can be customized, for example, **icon.png**. |
| profile | Indicates a custom configuration file. You can obtain the file content by using the [getProfileByAbility](../reference/apis/js-apis-bundleManager.md#bundlemanagergetprofilebyability) API. (Only files are supported in this directory.) | The file name can be customized, for example, **test_profile.json**. |
| rawfile | Indicates other types of files, which are stored in their raw formats after the application is built as an HAP file. They will not be integrated into the **resources.index** file.| The file name can be customized. |
**Media Resource Types**
......@@ -229,7 +229,7 @@ When referencing resources in the **rawfile** subdirectory, use the **"$rawfile(
>
> Resource descriptors accept only strings, such as **'app.type.name'**, and cannot be combined.
>
> The return value of **$r** is a **Resource** object. You can obtain the corresponding string by using the [getStringValue](../reference/apis/js-apis-resource-manager.md) API.
> The return value of **$r** is a **Resource** object. You can obtain the corresponding string by using the [getStringValue](../reference/apis/js-apis-resource-manager.md#getstringvalue9) API.
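For example, a minimal sketch of obtaining the string behind a **Resource** object (assumptions: stage model, called from a page where **getContext** is available, and the resource name is illustrative):

```js
// Sketch only: fetch the string value through the resource manager.
let resMgr = getContext(this).resourceManager;
resMgr.getStringValue($r('app.string.hello_message').id, (error, value) => {
  if (error != null) {
    console.error('getStringValue failed: ' + JSON.stringify(error));
  } else {
    console.info('string value: ' + value);
  }
});
```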
In the **.ets** file, you can use the resources defined in the **resources** directory. The following is a resource usage example based on the resource file examples in [Resource Group Sub-directories](#resource-group-subdirectories):
......@@ -252,7 +252,6 @@ Text($r('app.string.message_arrive', "five'o clock"))
Text($r('app.plural.eat_apple', 5, 5))
.fontColor($r('app.color.color_world'))
.fontSize($r('app.float.font_world'))
}
Image($r('app.media.my_background_image')) // Reference media resources.
......
......@@ -19,13 +19,9 @@
- [@ohos.application.DataShareExtensionAbility (DataShare Extension Ability)](js-apis-application-dataShareExtensionAbility.md)
- [@ohos.application.StaticSubscriberExtensionAbility (StaticSubscriberExtensionAbility)](js-apis-application-staticSubscriberExtensionAbility.md)
- Stage Model (To Be Deprecated Soon)
- [@ohos.application.Ability (Ability)](js-apis-application-ability.md)
- [@ohos.application.AbilityConstant (AbilityConstant)](js-apis-application-abilityConstant.md)
- [@ohos.application.AbilityLifecycleCallback (AbilityLifecycleCallback)](js-apis-application-abilityLifecycleCallback.md)
- [@ohos.application.AbilityStage (AbilityStage)](js-apis-application-abilityStage.md)
- [@ohos.application.context (Context)](js-apis-application-context.md)
- [@ohos.application.EnvironmentCallback (EnvironmentCallback)](js-apis-application-environmentCallback.md)
- [@ohos.application.ExtensionAbility (ExtensionAbility)](js-apis-application-extensionAbility.md)
- [@ohos.application.FormExtension (FormExtension)](js-apis-application-formExtension.md)
- [@ohos.application.ServiceExtensionAbility (ServiceExtensionAbility)](js-apis-application-serviceExtensionAbility.md)
- [@ohos.application.StartOptions (StartOptions)](js-apis-application-startOptions.md)
......@@ -59,7 +55,6 @@
- [@ohos.application.appManager (appManager)](js-apis-application-appManager.md)
- [@ohos.application.Configuration (Configuration)](js-apis-application-configuration.md)
- [@ohos.application.ConfigurationConstant (ConfigurationConstant)](js-apis-application-configurationConstant.md)
- [@ohos.application.errorManager (ErrorManager)](js-apis-application-errorManager.md)
- [@ohos.application.formBindingData (formBindingData)](js-apis-application-formBindingData.md)
- [@ohos.application.formError (FormError)](js-apis-application-formError.md)
- [@ohos.application.formHost (FormHost)](js-apis-application-formHost.md)
......@@ -82,7 +77,6 @@
- [context](js-apis-inner-app-context.md)
- [processInfo](js-apis-inner-app-processInfo.md)
- application
- [AbilityContext](js-apis-ability-context.md)
- [abilityDelegator](js-apis-inner-application-abilityDelegator.md)
- [abilityDelegatorArgs](js-apis-inner-application-abilityDelegatorArgs.md)
- [abilityMonitor](js-apis-inner-application-abilityMonitor.md)
......@@ -146,6 +140,7 @@
- [abilityInfo](js-apis-bundleManager-abilityInfo.md)
- [applicationInfo](js-apis-bundleManager-applicationInfo.md)
- [bundleInfo](js-apis-bundleManager-bundleInfo.md)
- [BundlePackInfo](js-apis-bundleManager-BundlePackInfo.md)
- [dispatchInfo](js-apis-bundleManager-dispatchInfo.md)
- [elementName](js-apis-bundleManager-elementName.md)
- [extensionAbilityInfo](js-apis-bundleManager-extensionAbilityInfo.md)
......@@ -222,10 +217,11 @@
- [@ohos.file.hash (File Hash Processing)](js-apis-file-hash.md)
- [@ohos.file.securityLabel (Data Label)](js-apis-file-securityLabel.md)
- [@ohos.file.statvfs (File System Space Statistics)](js-apis-file-statvfs.md)
- [@ohos.file.storageStatistics (Application Storage Statistics)](js-apis-file-storage-statistics.md)
- [@ohos.file.volumeManager (Volume Management)](js-apis-file-volumemanager.md)
- [@ohos.filemanagement.userFileManager (User Data Management)](js-apis-userFileManager.md)
- [@ohos.multimedia.medialibrary (Media Library Management)](js-apis-medialibrary.md)
- [@ohos.storageStatistics (Application Storage Statistics)](js-apis-storage-statistics.md)
- [@ohos.volumeManager (Volume Management)](js-apis-volumemanager.md)
- Telephony Service
- [@ohos.contact (Contacts)](js-apis-contact.md)
- [@ohos.telephony.call (Call)](js-apis-call.md)
......@@ -277,7 +273,7 @@
- [@ohos.InputMethodSubtype (Input Method Subtype)](js-apis-inputmethod-subtype.md)
- [@ohos.pasteboard (Pasteboard)](js-apis-pasteboard.md)
- [@ohos.screenLock (Screenlock)](js-apis-screen-lock.md)
- [@ohos.systemTime (System Time and Time Zone)](js-apis-system-time.md)
- [@ohos.systemDateTime (System Time and Time Zone)](js-apis-system-date-time.md)
- [@ohos.systemTimer (System Timer)](js-apis-system-timer.md)
- [@ohos.wallpaper (Wallpaper)](js-apis-wallpaper.md)
- [@ohos.web.webview (Webview)](js-apis-webview.md)
......@@ -369,6 +365,7 @@
- [@ohos.reminderAgent (Reminder Agent)](js-apis-reminderAgent.md)
- [@ohos.statfs (statfs)](js-apis-statfs.md)
- [@ohos.systemParameter (System Parameter)](js-apis-system-parameter.md)
- [@ohos.systemTime (System Time and Time Zone)](js-apis-system-time.md)
- [@ohos.usb (USB Management)](js-apis-usb-deprecated.md)
- [@ohos.usbV9 (USB Management)](js-apis-usb.md)
- [@system.app (Application Context)](js-apis-system-app.md)
......
......@@ -156,7 +156,7 @@ bundle.getApplicationInfo(bundleName, bundleFlags, (err, data) => {
> This API is deprecated since API version 9. You are advised to use [bundleManager.getAllBundleInfo](js-apis-bundleManager.md#bundlemanagergetallbundleinfo) instead.
getAllBundleInfo(bundleFlag: BundleFlag, userId?: number): Promise<Array\<BundleInfo>>
getAllBundleInfo(bundleFlag: BundleFlag, userId?: number): Promise\<Array\<BundleInfo\>\>
Obtains the information of all bundles of the specified user. This API uses a promise to return the result.
......@@ -199,7 +199,7 @@ bundle.getAllBundleInfo(bundleFlag, userId)
> This API is deprecated since API version 9. You are advised to use [bundleManager.getAllBundleInfo](js-apis-bundleManager.md#bundlemanagergetallbundleinfo) instead.
getAllBundleInfo(bundleFlag: BundleFlag, callback: AsyncCallback<Array\<BundleInfo>>): void
getAllBundleInfo(bundleFlag: BundleFlag, callback: AsyncCallback\<Array\<BundleInfo\>\>): void
Obtains the information of all bundles of the current user. This API uses an asynchronous callback to return the result.
......@@ -236,7 +236,7 @@ bundle.getAllBundleInfo(bundleFlag, (err, data) => {
> This API is deprecated since API version 9. You are advised to use [bundleManager.getAllBundleInfo](js-apis-bundleManager.md#bundlemanagergetallbundleinfo) instead.
getAllBundleInfo(bundleFlag: BundleFlag, userId: number, callback: AsyncCallback<Array\<BundleInfo>>): void
getAllBundleInfo(bundleFlag: BundleFlag, userId: number, callback: AsyncCallback\<Array\<BundleInfo\>\>): void
Obtains the information of all bundles of the specified user. This API uses an asynchronous callback to return the result.
......@@ -822,7 +822,7 @@ bundle.getPermissionDef(permissionName).then((data) => {
> This API is deprecated since API version 9. You are advised to use [bundleManager.getAllApplicationInfo](js-apis-bundleManager.md#bundlemanagergetallapplicationinfo) instead.
getAllApplicationInfo(bundleFlags: number, userId?: number): Promise<Array\<ApplicationInfo>>
getAllApplicationInfo(bundleFlags: number, userId?: number): Promise\<Array\<ApplicationInfo\>\>
Obtains the information about all applications of the specified user. This API uses a promise to return the result.
......@@ -864,7 +864,7 @@ bundle.getAllApplicationInfo(bundleFlags, userId)
> This API is deprecated since API version 9. You are advised to use [bundleManager.getAllApplicationInfo](js-apis-bundleManager.md#bundlemanagergetallapplicationinfo) instead.
getAllApplicationInfo(bundleFlags: number, userId: number, callback: AsyncCallback<Array\<ApplicationInfo>>): void
getAllApplicationInfo(bundleFlags: number, userId: number, callback: AsyncCallback\<Array\<ApplicationInfo\>\>): void
Obtains the information about all applications. This API uses an asynchronous callback to return the result.
......@@ -1230,7 +1230,7 @@ SystemCapability.BundleManager.BundleFramework
| Name | Type | Mandatory| Description |
| -------- | -------------------------------------------- | ---- | ----------------------- |
| info | [AbilityInfo](js-apis-bundle-AbilityInfo.md) | Yes | Ability information. |
| callback | AsyncCallback\<boolean> | Yes | Callback used to return the result. The value **true** means that the ability is enabled, and **false** means the opposite.|
| callback | AsyncCallback\<boolean> | Yes | Callback used to return the result. If the ability is enabled, **true** will be returned; otherwise, **false** will be returned.|
**Example**
......@@ -1320,7 +1320,7 @@ bundle.isApplicationEnabled(bundleName, (err, data) => {
> This API is deprecated since API version 9. You are advised to use [bundleManager.queryAbilityInfo](js-apis-bundleManager.md#bundlemanagerqueryabilityinfo) instead.
queryAbilityByWant(want: Want, bundleFlags: number, userId?: number): Promise<Array\<AbilityInfo>>
queryAbilityByWant(want: Want, bundleFlags: number, userId?: number): Promise\<Array\<AbilityInfo\>\>
Obtains the ability information based on given Want. This API uses a promise to return the result.
......@@ -1371,7 +1371,7 @@ bundle.queryAbilityByWant(want, bundleFlags, userId)
> This API is deprecated since API version 9. You are advised to use [bundleManager.queryAbilityInfo](js-apis-bundleManager.md#bundlemanagerqueryabilityinfo) instead.
queryAbilityByWant(want: Want, bundleFlags: number, userId: number, callback: AsyncCallback<Array\<AbilityInfo>>): void
queryAbilityByWant(want: Want, bundleFlags: number, userId: number, callback: AsyncCallback\<Array\<AbilityInfo\>\>): void
Obtains the ability information of the specified user based on given Want. This API uses an asynchronous callback to return the result.
......@@ -1416,7 +1416,7 @@ bundle.queryAbilityByWant(want, bundleFlags, userId, (err, data) => {
> This API is deprecated since API version 9. You are advised to use [bundleManager.queryAbilityInfo](js-apis-bundleManager.md#bundlemanagerqueryabilityinfo) instead.
queryAbilityByWant(want: Want, bundleFlags: number, callback: AsyncCallback<Array\<AbilityInfo>>): void;
queryAbilityByWant(want: Want, bundleFlags: number, callback: AsyncCallback\<Array\<AbilityInfo\>\>): void;
Obtains the ability information based on given Want. This API uses an asynchronous callback to return the result.
......@@ -1603,7 +1603,7 @@ bundle.getNameForUid(uid, (err, data) => {
## bundle.getAbilityIcon<sup>8+</sup> <sup>deprecated<sup>
> This API is deprecated since API version 9. You are advised to use [bundleManager.getAbilityIcon](js-apis-bundleManager.md#bundlemanagergetabilityicon) instead.
> This API is deprecated since API version 9. You are advised to use [resourceManager.getMediaContent](js-apis-resource-manager.md#getmediacontent9) instead.
getAbilityIcon(bundleName: string, abilityName: string): Promise\<image.PixelMap>;
......@@ -1646,7 +1646,7 @@ bundle.getAbilityIcon(bundleName, abilityName)
## bundle.getAbilityIcon<sup>8+</sup> <sup>deprecated<sup>
> This API is deprecated since API version 9. You are advised to use [bundleManager.getAbilityIcon](js-apis-bundleManager.md#bundlemanagergetabilityicon) instead.
> This API is deprecated since API version 9. You are advised to use [resourceManager.getMediaContent](js-apis-resource-manager.md#getmediacontent9) instead.
getAbilityIcon(bundleName: string, abilityName: string, callback: AsyncCallback\<image.PixelMap>): void;
......
......@@ -10,7 +10,7 @@ The **Ability** module provides all level-2 module APIs for developers to export
## Modules to Import
```ts
import ability from '@ohos.ability.ability'
import ability from '@ohos.ability.ability';
```
**System capability**: SystemCapability.Ability.AbilityBase
......
......@@ -35,7 +35,7 @@ Obtains the ID attached to the end of a given URI.
**Example**
```ts
let id = dataUriUtils.getId("com.example.dataUriUtils/1221");
let id = dataUriUtils.getId('com.example.dataUriUtils/1221');
```
......@@ -66,9 +66,9 @@ Attaches an ID to the end of a given URI.
```ts
let id = 1122;
let uri = dataUriUtils.attachId(
"com.example.dataUriUtils",
'com.example.dataUriUtils',
id,
)
);
```
......@@ -96,7 +96,7 @@ Deletes the ID from the end of a given URI.
**Example**
```ts
let uri = dataUriUtils.deleteId("com.example.dataUriUtils/1221")
let uri = dataUriUtils.deleteId('com.example.dataUriUtils/1221');
```
......@@ -127,7 +127,7 @@ Updates the ID in a given URI.
```ts
let id = 1122;
let uri = dataUriUtils.updateId(
"com.example.dataUriUtils/1221",
'com.example.dataUriUtils/1221',
id
)
);
```
......@@ -9,7 +9,7 @@ The **ErrorCode** module defines the error codes that may be returned when an ab
## Modules to Import
```ts
import errorCode from '@ohos.ability.errorCode'
import errorCode from '@ohos.ability.errorCode';
```
## ErrorCode
......
......@@ -44,7 +44,7 @@ Enumerates the action constants of the **Want** object. **action** specifies the
| INTENT_PARAMS_INTENT | ability.want.params.INTENT | Action of displaying selection options with an action selector. |
| INTENT_PARAMS_TITLE | ability.want.params.TITLE | Title of the character sequence dialog box used with the action selector. |
| ACTION_FILE_SELECT<sup>7+</sup> | ohos.action.fileSelect | Action of selecting a file. |
| PARAMS_STREAM<sup>7+</sup> | ability.params.stream | URI of the data stream associated with the target when the data is sent. |
| PARAMS_STREAM<sup>7+</sup> | ability.params.stream | URI of the data stream associated with the target when the data is sent. The value must be an array of the string type. |
| ACTION_APP_ACCOUNT_OAUTH <sup>8+</sup> | ohos.account.appAccount.action.oauth | Action of providing the OAuth service. |
| ACTION_APP_ACCOUNT_AUTH <sup>9+</sup> | account.appAccount.action.auth | Action of providing the authentication service. |
| ACTION_MARKET_DOWNLOAD <sup>9+</sup> | ohos.want.action.marketDownload | Action of downloading an application from the application market.<br>**System API**: This is a system API and cannot be called by third-party applications. |
......
......@@ -21,14 +21,14 @@ Enumerates the ability states. This enum can be used together with [AbilityRunni
**System API**: This enum is an internal definition of a system API and cannot be called by third-party applications.
| Name| Value| Description|
| Name| Value| Description|
| -------- | -------- | -------- |
| INITIAL | 0 | The ability is in the initial state.|
| FOCUS | 2 | The ability has the focus.|
| FOREGROUND | 9 | The ability is in the foreground state. |
| BACKGROUND | 10 | The ability is in the background state. |
| FOREGROUNDING | 11 | The ability is in the state of being switched to the foreground. |
| BACKGROUNDING | 12 | The ability is in the state of being switched to the background. |
| INITIAL | 0 | The ability is in the initial state.|
| FOCUS | 2 | The ability has the focus. |
| FOREGROUND | 9 | The ability is in the foreground state. |
| BACKGROUND | 10 | The ability is in the background state. |
| FOREGROUNDING | 11 | The ability is in the state of being switched to the foreground. |
| BACKGROUNDING | 12 | The ability is in the state of being switched to the background. |
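As a hedged illustration of how **AbilityState** can be used together with the UIAbility running information returned by **getAbilityRunningInfos** (documented below), consider the following sketch. The import path and field names are assumptions based on the abilityManager module; this is a system API and also requires the ohos.permission.GET_RUNNING_INFO permission.

```ts
// Sketch only (assumptions): the import path and field names follow the
// abilityManager module documented below; this is a system API and requires
// the ohos.permission.GET_RUNNING_INFO permission.
import abilityManager from '@ohos.app.ability.abilityManager';

abilityManager.getAbilityRunningInfos((err, infos) => {
  if (err) {
    console.error('getAbilityRunningInfos fail, err: ' + JSON.stringify(err));
    return;
  }
  // Check whether the first returned UIAbility is currently in the foreground.
  if (infos.length > 0 && infos[0].abilityState === abilityManager.AbilityState.FOREGROUND) {
    console.log('The first ability is in the foreground.');
  }
});
```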
## updateConfiguration
......@@ -39,7 +39,7 @@ Updates the configuration. This API uses an asynchronous callback to return the
**Permission required**: ohos.permission.UPDATE_CONFIGURATION
**System capability**: SystemCapability.Ability.AbilityRuntime.Core
**Parameters**
| Name | Type | Mandatory | Description |
......@@ -64,7 +64,7 @@ const config = {
language: 'Zh-Hans', // Simplified Chinese.
colorMode: COLOR_MODE_LIGHT, // Light theme.
direction: DIRECTION_VERTICAL, // Vertical direction.
screenDensity: SCREEN_DENSITY_SDPI, // The screen resolution is SDPI.
screenDensity: SCREEN_DENSITY_SDPI, // The screen pixel density is 'sdpi'.
displayId: 1, // The application is displayed on the display with ID 1.
hasPointerDevice: true, // A pointer device is connected.
};
......@@ -76,7 +76,7 @@ try {
} else {
console.log('updateConfiguration success.');
}
})
});
} catch (paramError) {
console.log('error.code: ' + JSON.stringify(paramError.code)
+ ' error.message: ' + JSON.stringify(paramError.message));
......@@ -122,7 +122,7 @@ const config = {
language: 'Zh-Hans', // Simplified Chinese.
colorMode: COLOR_MODE_LIGHT, // Light theme.
direction: DIRECTION_VERTICAL, // Vertical direction.
screenDensity: SCREEN_DENSITY_SDPI, // The screen resolution is SDPI.
screenDensity: SCREEN_DENSITY_SDPI, // The screen pixel density is 'sdpi'.
displayId: 1, // The application is displayed on the display with ID 1.
hasPointerDevice: true, // A pointer device is connected.
};
......@@ -132,7 +132,7 @@ try {
console.log('updateConfiguration success.');
}).catch((err) => {
console.log('updateConfiguration fail, err: ' + JSON.stringify(err));
})
});
} catch (paramError) {
console.log('error.code: ' + JSON.stringify(paramError.code)
+ ' error.message: ' + JSON.stringify(paramError.message));
......@@ -153,7 +153,7 @@ Obtains the UIAbility running information. This API uses an asynchronous callbac
| Name | Type | Mandatory | Description |
| --------- | ---------------------------------------- | ---- | -------------- |
| callback | AsyncCallback\<Array\<[AbilityRunningInfo](js-apis-inner-application-abilityRunningInfo.md)>> | Yes | Callback used to return the API call result and the UIAbility running information. You can perform error handling or custom processing in this callback. |
| callback | AsyncCallback\<Array\<[AbilityRunningInfo](js-apis-inner-application-abilityRunningInfo.md)>> | Yes | Callback used to return the API call result and the UIAbility running information. You can perform error handling or custom processing in this callback. |
**Error codes**
......@@ -274,7 +274,7 @@ try {
getExtensionRunningInfos(upperLimit: number): Promise\<Array\<ExtensionRunningInfo>>
Obtains the ExtensionAbility running information. This API uses a promise to return the result.
**Required permissions**: ohos.permission.GET_RUNNING_INFO
**System capability**: SystemCapability.Ability.AbilityRuntime.Core
......@@ -311,7 +311,7 @@ try {
console.log('getExtensionRunningInfos success, data: ' + JSON.stringify(data));
}).catch((err) => {
console.log('getExtensionRunningInfos fail, err: ' + JSON.stringify(err));
})
});
} catch (paramError) {
console.log('error.code: ' + JSON.stringify(paramError.code)
+ ' error.message: ' + JSON.stringify(paramError.message));
......@@ -359,7 +359,7 @@ abilityManager.getTopAbility((err, data) => {
getTopAbility(): Promise\<ElementName>;
Obtains the top ability, which is the ability that has the window focus. This API uses a promise to return the result.
**System capability**: SystemCapability.Ability.AbilityRuntime.Core
**Return value**
......@@ -385,5 +385,5 @@ abilityManager.getTopAbility().then((data) => {
console.log('getTopAbility success, data: ' + JSON.stringify(data));
}).catch((err) => {
console.log('getTopAbility fail, err: ' + JSON.stringify(err));
})
});
```