Commit 1be5af88 authored by xsz233

Merge branch 'master' of gitee.com:openharmony/docs into master

Signed-off-by: xsz233 <xushizhe@huawei.com>

......@@ -155,9 +155,9 @@ zh-cn/application-dev/work-scheduler/ @HelloCrease
zh-cn/application-dev/internationalization/ @HelloCrease
zh-cn/application-dev/device/usb-overview.md @ge-yafang
zh-cn/application-dev/device/usb-guidelines.md @ge-yafang
zh-cn/application-dev/device/device-location-overview.md @zengyawen
zh-cn/application-dev/device/device-location-info.md @zengyawen
zh-cn/application-dev/device/device-location-geocoding.md @zengyawen
zh-cn/application-dev/device/device-location-overview.md @RayShih
zh-cn/application-dev/device/device-location-info.md @RayShih
zh-cn/application-dev/device/device-location-geocoding.md @RayShih
zh-cn/application-dev/device/sensor-overview.md @HelloCrease
zh-cn/application-dev/device/sensor-guidelines.md @HelloCrease
zh-cn/application-dev/device/vibrator-overview.md @HelloCrease
......@@ -181,7 +181,7 @@ zh-cn/application-dev/napi/drawing-guidelines.md @ge-yafang
zh-cn/application-dev/napi/rawfile-guidelines.md @HelloCrease
zh-cn/application-dev/reference/js-service-widget-ui/ @HelloCrease
zh-cn/application-dev/faqs/ @zengyawen
zh-cn/application-dev/file-management/ @qinxiaowang
zh-cn/application-dev/file-management/ @zengyawen
zh-cn/application-dev/application-test/ @HelloCrease
zh-cn/application-dev/device-usage-statistics/ @HelloCrease
......@@ -212,7 +212,7 @@ zh-cn/application-dev/reference/apis/js-apis-audio.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-camera.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-image.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-media.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-medialibrary.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-medialibrary.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-i18n.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-intl.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-resource-manager.md @HelloCrease
......@@ -239,13 +239,13 @@ zh-cn/application-dev/reference/apis/js-apis-system-storage.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-data-rdb.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-settings.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-data-resultset.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-document.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-environment.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-fileio.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-filemanager.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-statfs.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-storage-statistics.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-volumemanager.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-document.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-environment.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-fileio.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-filemanager.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-statfs.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-storage-statistics.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-volumemanager.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-contact.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-call.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-observer.md @zengyawen
......@@ -302,38 +302,38 @@ zh-cn/application-dev/reference/apis/js-apis-vibrator.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-appAccount.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-distributed-account.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-osAccount.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-convertxml.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-process.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-uri.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-url.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-util.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-arraylist.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-deque.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-hashmap.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-hashset.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-lightweightmap.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-lightweightset.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-linkedlist.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-list.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-plainarray.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-queue.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-stack.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-treemap.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-treeset.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-vector.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-worker.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-xml.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-testRunner.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-convertxml.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-process.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-uri.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-url.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-util.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-arraylist.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-deque.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-hashmap.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-hashset.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-lightweightmap.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-lightweightset.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-linkedlist.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-list.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-plainarray.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-queue.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-stack.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-treemap.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-treeset.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-vector.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-worker.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-xml.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-testRunner.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-resourceschedule-backgroundTaskManager.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-uitest.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-hisysevent.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-privacyManager.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-EnterpriseAdminExtensionAbility.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-animator.md @HelloCrease @qieqiewl @tomatodevboy @niulihua
zh-cn/application-dev/reference/apis/js-apis-uiappearance.md @HelloCrease @qieqiewl @tomatodevboy @niulihua
zh-cn/application-dev/reference/apis/js-apis-useriam-faceauth.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-userfilemanager.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-userfilemanager.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-cryptoFramework.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-buffer.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-buffer.md @ge-yafang
zh-cn/application-dev/reference/apis/development-intro.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-accessibility-extension-context.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-application-applicationContext.md @RayShih
......@@ -366,7 +366,7 @@ zh-cn/application-dev/reference/apis/js-apis-mouseevent.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-nfcController.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-nfctech.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-pointer.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-securityLabel.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-securityLabel.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-app.md @RayShih @shuaytao @wangzhen107 @inter515
zh-cn/application-dev/reference/apis/js-apis-system-battery.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-bluetooth.md @RayShih
......@@ -374,7 +374,7 @@ zh-cn/application-dev/reference/apis/js-apis-system-brightness.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-configuration.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-system-device.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-fetch.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-file.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-system-file.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-location.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-system-mediaquery.md @HelloCrease @qieqiewl @tomatodevboy @niulihua
zh-cn/application-dev/reference/apis/js-apis-system-network.md @zengyawen
......@@ -390,8 +390,8 @@ zh-cn/application-dev/reference/apis/js-apis-accessibility-config.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-Bundle-BundleStatusCallback.md @RayShih @shuaytao @wangzhen107 @inter515
zh-cn/application-dev/reference/apis/js-apis-bundle-PackInfo.md @RayShih @shuaytao @wangzhen107 @inter515
zh-cn/application-dev/reference/apis/js-apis-enterpriseDeviceManager-DeviceSettingsManager.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-fileAccess.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-fileExtensionInfo.md @qinxiaowang
zh-cn/application-dev/reference/apis/js-apis-fileAccess.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-fileExtensionInfo.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-net-ethernet.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-net-policy.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-net-sharing.md @zengyawen
......@@ -450,16 +450,6 @@ zh-cn/application-dev/reference/apis/js-apis-formprovider.md @RayShih @littlejer
zh-cn/application-dev/reference/apis/js-apis-inputmethod-extension-ability.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-inputmethod-extension-context.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-inputmethod-subtype.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-inputmethod-framework.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-usb.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-datashare.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-colorspace-manager.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-display.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-distributed-data_object.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-distributedKVStore.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-pasteboard.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-preferences.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errorcode-window.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-application-quickFixManager.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-missionManager.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-particleAbility.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
......@@ -470,3 +460,62 @@ zh-cn/application-dev/reference/apis/js-apis-service-extension-ability.md @RaySh
zh-cn/application-dev/reference/apis/js-apis-service-extension-context.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-wantAgent.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/errorcodes/errcode-ability.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-access-token.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-accessibility.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-account.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-animator.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-app-account.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-audio.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-avsession.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-backgroundTaskMgr.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-batteryStatistics.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-brightness.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-buffer.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-bundle.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-colorspace-manager.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-CommonEventService.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-containers.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-data-rdb.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-datashare.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-device-manager.md @qinxiaowang
zh-cn/application-dev/reference/errorcodes/errcode-DeviceUsageStatistics.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-display.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-distributed-dataObject.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-distributedKVStore.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-DistributedNotificationService.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-DistributedSchedule.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-enterpriseDeviceManager.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-faultlogger.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-filemanagement.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-geoLocationManager.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-hiappevent.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-hisysevent.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-hiviewdfx-hidebug.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-huks.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-i18n.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-inputmethod-framework.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-multimodalinput.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-nfc.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-pasteboard.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-power.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-preferences.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-promptAction.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-reminderAgentManager.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-request.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-resource-manager.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-router.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-rpc.md @qinxiaowang
zh-cn/application-dev/reference/errorcodes/errcode-runninglock.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-sensor.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-system-parameterV9.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-thermal.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-uitest.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-universal.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-update.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-usb.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-vibrator.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-webview.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-window.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-workScheduler.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-zlib.md @RayShih
......@@ -150,7 +150,7 @@ Currently, the OpenHarmony community supports 17 types of development boards, wh
| System Type| Board Model| Chip Model| <div style="width:200pt">Function Description and Use Case</div> | Application Scenario| Code Repository |
| -------- | -------- | -------- | -------- | -------- | -------- |
| Standard system| Runhe HH-SCDAYU200| RK3568 | <div style="width:200pt">Function description:<br>Bolstered by the Rockchip RK3568, the HH-SCDAYU200 development board integrates the dual-core GPU and efficient NPU. Its quad-core 64-bit Cortex-A55 processor uses the advanced 22 nm fabrication process and is clocked at up to 2.0 GHz. The board is packed with Bluetooth, Wi-Fi, audio, video, and camera features, with a wide range of expansion ports to accommodate various video input and outputs. It comes with dual GE auto-sensing RJ45 ports, so it can be used in multi-connectivity products, such as network video recorders (NVRs) and industrial gateways.<br>Use case:<br>[DAYU200 Use Case](device-dev/porting/porting-dayu200-on_standard-demo.md)</div> | Entertainment, easy travel, and smart home, such as kitchen hoods, ovens, and treadmills.| [device_soc_rockchip](https://gitee.com/openharmony/device_soc_rockchip)<br>[device_board_hihope](https://gitee.com/openharmony/device_board_hihope)<br>[vendor_hihope](https://gitee.com/openharmony/vendor_hihope) <br> |
| Standard system| Runhe HH-SCDAYU200| RK3568 | <div style="width:200pt">Function description:<br>Bolstered by the Rockchip RK3568, the HH-SCDAYU200 development board integrates the dual-core GPU and efficient NPU. Its quad-core 64-bit Cortex-A55 processor uses the advanced 22 nm fabrication process and is clocked at up to 2.0 GHz. The board is packed with Bluetooth, Wi-Fi, audio, video, and camera features, with a wide range of expansion ports to accommodate various video input and outputs. It comes with dual GE auto-sensing RJ45 ports, so it can be used in multi-connectivity products, such as network video recorders (NVRs) and industrial gateways.</div> | Entertainment, easy travel, and smart home, such as kitchen hoods, ovens, and treadmills.| [device_soc_rockchip](https://gitee.com/openharmony/device_soc_rockchip)<br>[device_board_hihope](https://gitee.com/openharmony/device_board_hihope)<br>[vendor_hihope](https://gitee.com/openharmony/vendor_hihope) <br> |
| Small system| Hispark_Taurus | Hi3516DV300 | <div style="width:200pt">Function Description:<br>Hi3516D V300 is the next-generation system on chip (SoC) for smart HD IP cameras. It integrates the next-generation image signal processor (ISP), H.265 video compression encoder, and high-performance NNIE engine, and delivers high performance in terms of low bit rate, high image quality, intelligent processing and analysis, and low power consumption.</div> | Smart device with screens, such as refrigerators with screens and head units.| [device_soc_hisilicon](https://gitee.com/openharmony/device_soc_hisilicon)<br>[device_board_hisilicon](https://gitee.com/openharmony/device_board_hisilicon)<br>[vendor_hisilicon](https://gitee.com/openharmony/vendor_hisilicon) <br> |
| Mini system| Multi-modal V200Z-R | BES2600 | <div style="width:200pt">Function description:<br>The multi-modal V200Z-R development board is a high-performance, multi-functional, and cost-effective AIoT SoC powered by the BES2600WM chip of Bestechnic. It integrates a quad-core ARM processor with a frequency of up to 1 GHz as well as dual-mode Wi-Fi and dual-mode Bluetooth. The board supports the 802.11 a/b/g/n/ and BT/BLE 5.2 standards. It is able to accommodate RAM of up to 42 MB and flash memory of up to 32 MB, and supports the MIPI display serial interface (DSI) and camera serial interface (CSI). It is applicable to various AIoT multi-modal VUI and GUI interaction scenarios.<br>Use case:<br>[Multi-modal V200Z-R Use Case](device-dev/porting/porting-bes2600w-on-minisystem-display-demo.md)</div> | Smart hardware, and smart devices with screens, such as speakers and watches.| [device_soc_bestechnic](https://gitee.com/openharmony/device_soc_bestechnic)<br>[device_board_fnlink](https://gitee.com/openharmony/device_board_fnlink)<br>[vendor_bestechnic](https://gitee.com/openharmony/vendor_bestechnic) <br> |
......
......@@ -7,7 +7,7 @@ To ensure successful communications between the client and server, interfaces re
![IDL-interface-description](./figures/IDL-interface-description.png)
IDL provides the following functions:
**IDL provides the following functions:**
- Declares interfaces provided by system services for external systems, and based on the interface declaration, generates C, C++, JS, or TS code for inter-process communication (IPC) or remote procedure call (RPC) proxies and stubs during compilation.
......@@ -17,7 +17,7 @@ IDL provides the following functions:
![IPC-RPC-communication-model](./figures/IPC-RPC-communication-model.png)
IDL has the following advantages:
**IDL has the following advantages:**
- Services are defined in the form of interfaces in IDL. Therefore, you do not need to focus on implementation details.
......@@ -433,7 +433,7 @@ export default {
console.log('ServiceAbility want:' + JSON.stringify(want));
console.log('ServiceAbility want name:' + want.bundleName)
} catch(err) {
console.log("ServiceAbility error:" + err)
console.log('ServiceAbility error:' + err)
}
console.info('ServiceAbility onConnect end');
return new IdlTestImp('connect');
......@@ -455,13 +455,13 @@ import featureAbility from '@ohos.ability.featureAbility';
function callbackTestIntTransaction(result: number, ret: number): void {
if (result == 0 && ret == 124) {
console.log("case 1 success ");
console.log('case 1 success');
}
}
function callbackTestStringTransaction(result: number): void {
if (result == 0) {
console.log("case 2 success ");
console.log('case 2 success');
}
}
......@@ -472,17 +472,17 @@ var onAbilityConnectDone = {
testProxy.testStringTransaction('hello', callbackTestStringTransaction);
},
onDisconnect:function (elementName) {
console.log("onDisconnectService onDisconnect");
console.log('onDisconnectService onDisconnect');
},
onFailed:function (code) {
console.log("onDisconnectService onFailed");
console.log('onDisconnectService onFailed');
}
};
function connectAbility(): void {
let want = {
"bundleName":"com.example.myapplicationidl",
"abilityName": "com.example.myapplicationidl.ServiceAbility"
bundleName: 'com.example.myapplicationidl',
abilityName: 'com.example.myapplicationidl.ServiceAbility'
};
let connectionId = -1;
connectionId = featureAbility.connectAbility(want, onAbilityConnectDone);
......@@ -495,7 +495,7 @@ function connectAbility: void {
You can send a class from one process to another through IPC interfaces. However, you must ensure that the peer can use the code of this class and that the class supports the **marshalling** and **unmarshalling** methods, which OpenHarmony uses to serialize objects into, and deserialize them from, a form that each process can identify.
To create a class that supports the sequenceable type, perform the following operations:
**To create a class that supports the sequenceable type, perform the following operations:**
1. Implement the **marshalling** method, which obtains the current state of the object and serializes the object into a **Parcel** object.
2. Implement the **unmarshalling** method, which deserializes the object from a **Parcel** object.
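For reference, a minimal sketch of such a class is shown below. It is illustrative only: the class name and fields are assumptions rather than part of this change, and the **Parcel** type used is **rpc.MessageParcel** from **@ohos.rpc**, consistent with the proxy code above.

```ts
import rpc from '@ohos.rpc';

// Illustrative sequenceable class; the name and fields are assumptions.
export default class MySequenceable {
    num: number = 0;
    str: string = '';

    constructor(num: number, str: string) {
        this.num = num;
        this.str = str;
    }

    // Serialize the current state of the object into the Parcel object.
    marshalling(messageParcel: rpc.MessageParcel): boolean {
        messageParcel.writeInt(this.num);
        messageParcel.writeString(this.str);
        return true;
    }

    // Deserialize the object from the Parcel object, reading fields in the order they were written.
    unmarshalling(messageParcel: rpc.MessageParcel): boolean {
        this.num = messageParcel.readInt();
        this.str = messageParcel.readString();
        return true;
    }
}
```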
......@@ -595,7 +595,7 @@ export default class IdlTestServiceProxy implements IIdlTestService {
let _reply = new rpc.MessageParcel();
_data.writeInt(data);
this.proxy.sendRequest(IdlTestServiceProxy.COMMAND_TEST_INT_TRANSACTION, _data, _reply, _option).then(function(result) {
if (result.errCode === 0) {
if (result.errCode == 0) {
let _errCode = result.reply.readInt();
if (_errCode != 0) {
let _returnValue = undefined;
......@@ -605,7 +605,7 @@ export default class IdlTestServiceProxy implements IIdlTestService {
let _returnValue = result.reply.readInt();
callback(_errCode, _returnValue);
} else {
console.log("sendRequest failed, errCode: " + result.errCode);
console.log('sendRequest failed, errCode: ' + result.errCode);
}
})
}
......@@ -617,11 +617,11 @@ export default class IdlTestServiceProxy implements IIdlTestService {
let _reply = new rpc.MessageParcel();
_data.writeString(data);
this.proxy.sendRequest(IdlTestServiceProxy.COMMAND_TEST_STRING_TRANSACTION, _data, _reply, _option).then(function(result) {
if (result.errCode === 0) {
if (result.errCode == 0) {
let _errCode = result.reply.readInt();
callback(_errCode);
} else {
console.log("sendRequest failed, errCode: " + result.errCode);
console.log('sendRequest failed, errCode: ' + result.errCode);
}
})
}
......@@ -644,12 +644,12 @@ import nativeMgr from 'nativeManager';
function testIntTransactionCallback(errCode: number, returnValue: number)
{
console.log("errCode: " + errCode + " returnValue: " + returnValue);
console.log('errCode: ' + errCode + ' returnValue: ' + returnValue);
}
function testStringTransactionCallback(errCode: number)
{
console.log("errCode: " + errCode);
console.log('errCode: ' + errCode);
}
function jsProxyTriggerCppStub()
......@@ -660,6 +660,6 @@ function jsProxyTriggerCppStub()
tsProxy.testIntTransaction(10, testIntTransactionCallback);
// Call testStringTransaction.
tsProxy.testStringTransaction("test", testIntTransactionCallback);
tsProxy.testStringTransaction('test', testStringTransactionCallback);
}
```
......@@ -8,14 +8,13 @@
- Quick Start
- Getting Started
- [Preparations](quick-start/start-overview.md)
- [Getting Started with eTS in Stage Model](quick-start/start-with-ets-stage.md)
- [Getting Started with eTS in FA Model](quick-start/start-with-ets-fa.md)
- [Getting Started with ArkTS in Stage Model](quick-start/start-with-ets-stage.md)
- [Getting Started with ArkTS in FA Model](quick-start/start-with-ets-fa.md)
- [Getting Started with JavaScript in FA Model](quick-start/start-with-js-fa.md)
- Development Fundamentals
- [Application Package Structure Configuration File (FA Model)](quick-start/package-structure.md)
- [Application Package Structure Configuration File (Stage Model)](quick-start/stage-structure.md)
- [SysCap](quick-start/syscap.md)
- [HarmonyAppProvision Configuration File](quick-start/app-provision-structure.md)
- Development
- [Ability Development](ability/Readme-EN.md)
......
# Ability Development
- [Ability Framework Overview](ability-brief.md)
- [Context Usage](context-userguide.md)
- FA Model
......@@ -19,5 +20,3 @@
- [Ability Assistant Usage](ability-assistant-guidelines.md)
- [ContinuationManager Development](continuationmanager.md)
- [Test Framework Usage](ability-delegator.md)
......@@ -96,13 +96,13 @@ Obtain the context by calling **context.getApplicationContext()** in **Ability**
**Example**
```javascript
import AbilityStage from "@ohos.application.AbilityStage";
import Ability from "@ohos.application.Ability";
var lifecycleid;
export default class MyAbilityStage extends AbilityStage {
export default class MainAbility extends Ability {
onCreate() {
console.log("MyAbilityStage onCreate")
console.log("MainAbility onCreate")
let AbilityLifecycleCallback = {
onAbilityCreate(ability){
console.log("AbilityLifecycleCallback onAbilityCreate ability:" + JSON.stringify(ability));
......@@ -141,11 +141,11 @@ export default class MyAbilityStage extends AbilityStage {
// 2. Use applicationContext to register and listen for the ability lifecycle in the application.
lifecycleid = applicationContext.registerAbilityLifecycleCallback(AbilityLifecycleCallback);
console.log("registerAbilityLifecycleCallback number: " + JSON.stringify(lifecycleid));
}
},
onDestroy() {
let applicationContext = this.context.getApplicationContext();
applicationContext.unregisterAbilityLifecycleCallback(lifecycleid, (error, data) => {
console.log("unregisterAbilityLifecycleCallback success, err: " + JSON.stringify(error));
console.log("unregisterAbilityLifecycleCallback success, err: " + JSON.stringify(error));
});
}
}
......@@ -211,7 +211,13 @@ export default class MainAbility extends Ability {
let context = this.context;
console.log("[Demo] MainAbility bundleName " + context.abilityInfo.bundleName)
windowStage.setUIContent(this.context, "pages/index", null)
windowStage.loadContent("pages/index", (err, data) => {
if (err.code) {
console.error('Failed to load the content. Cause:' + JSON.stringify(err));
return;
}
console.info('Succeeded in loading the content. Data: ' + JSON.stringify(data))
});
}
onWindowStageDestroy() {
......@@ -237,7 +243,7 @@ For details, see [FormExtensionContext](../reference/apis/js-apis-formextensionc
### Obtaining the Context on an ArkTS Page
In the stage model, in the `onWindowStageCreate` lifecycle of an ability, you can call `SetUIContent` of `WindowStage` to load an ArkTS page. In some scenarios, you need to obtain the context on the page to call related APIs.
In the stage model, in the onWindowStageCreate lifecycle of an ability, you can call **SetUIContent** of **WindowStage** to load an ArkTS page. In some scenarios, you need to obtain the context on the page to call related APIs.
**How to Obtain**
......@@ -245,7 +251,7 @@ Use the API described in the table below to obtain the context associated with a
| API | Description |
| :------------------------------------ | :--------------------------- |
| getContext(component: Object): Object | Obtains the `Context` object associated with a component on the page.|
| getContext(component: Object): Object | Obtains the **Context** object associated with a component on the page.|
**Example**
......
......@@ -14,20 +14,20 @@ As the entry of the ability continuation capability, **continuationManager** is
## Available APIs
| API | Description|
| ---------------------------------------------------------------------------------------------- | ----------- |
| register(callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| register(options: ContinuationExtraParams, callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API uses an asynchronous callback to return the result.|
| register(options?: ContinuationExtraParams): Promise\<number> | Registers the continuation management service and obtains a token. This API uses a promise to return the result.|
| on(type: "deviceConnect", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device connection events. This API uses an asynchronous callback to return the result.|
| on(type: "deviceDisconnect", token: number, callback: Callback\<Array\<string>>): void | Subscribes to device disconnection events. This API uses an asynchronous callback to return the result.|
| off(type: "deviceConnect", token: number): void | Unsubscribes from device connection events.|
| off(type: "deviceDisconnect", token: number): void | Unsubscribes from device disconnection events.|
| startDeviceManager(token: number, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| startDeviceManager(token: number, options: ContinuationExtraParams, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API uses an asynchronous callback to return the result.|
| startDeviceManager(token: number, options?: ContinuationExtraParams): Promise\<void> | Starts the device selection module to show the list of available devices. This API uses a promise to return the result.|
| updateConnectStatus(token: number, deviceId: string, status: DeviceConnectState, callback: AsyncCallback\<void>): void | Instructs the device selection module to update the device connection state. This API uses an asynchronous callback to return the result.|
| updateConnectStatus(token: number, deviceId: string, status: DeviceConnectState): Promise\<void> | Instructs the device selection module to update the device connection state. This API uses a promise to return the result.|
| unregister(token: number, callback: AsyncCallback\<void>): void | Deregisters the continuation management service. This API uses an asynchronous callback to return the result.|
| unregister(token: number): Promise\<void> | Deregisters the continuation management service. This API uses a promise to return the result.|
| registerContinuation(callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| registerContinuation(options: ContinuationExtraParams, callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API uses an asynchronous callback to return the result.|
| registerContinuation(options?: ContinuationExtraParams): Promise\<number> | Registers the continuation management service and obtains a token. This API uses a promise to return the result.|
| on(type: "deviceSelected", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device connection events. This API uses an asynchronous callback to return the result.|
| on(type: "deviceUnselected", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device disconnection events. This API uses an asynchronous callback to return the result.|
| off(type: "deviceSelected", token: number): void | Unsubscribes from device connection events.|
| off(type: "deviceUnselected", token: number): void | Unsubscribes from device disconnection events.|
| startContinuationDeviceManager(token: number, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| startContinuationDeviceManager(token: number, options: ContinuationExtraParams, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API uses an asynchronous callback to return the result.|
| startContinuationDeviceManager(token: number, options?: ContinuationExtraParams): Promise\<void> | Starts the device selection module to show the list of available devices. This API uses a promise to return the result.|
| updateContinuationState(token: number, deviceId: string, status: DeviceConnectState, callback: AsyncCallback\<void>): void | Instructs the device selection module to update the device connection state. This API uses an asynchronous callback to return the result.|
| updateContinuationState(token: number, deviceId: string, status: DeviceConnectState): Promise\<void> | Instructs the device selection module to update the device connection state. This API uses a promise to return the result.|
| unregisterContinuation(token: number, callback: AsyncCallback\<void>): void | Deregisters the continuation management service. This API uses an asynchronous callback to return the result.|
| unregisterContinuation(token: number): Promise\<void> | Deregisters the continuation management service. This API uses a promise to return the result.|
## How to Develop
1. Import the **continuationManager** module.
......@@ -36,7 +36,7 @@ As the entry of the ability continuation capability, **continuationManager** is
import continuationManager from '@ohos.continuation.continuationManager';
```
2. Apply for permissions required for cross-device continuation or collaboration operations.
2. Apply for the **DISTRIBUTED_DATASYNC** permission.
The permission application operation varies according to the ability model in use. In the FA model, add the required permission in the `config.json` file, as follows:
......@@ -57,6 +57,7 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
import abilityAccessCtrl from "@ohos.abilityAccessCtrl";
import bundle from '@ohos.bundle';
import featureAbility from '@ohos.ability.featureAbility';
async function requestPermission() {
let permissions: Array<string> = [
......@@ -124,7 +125,8 @@ As the entry of the ability continuation capability, **continuationManager** is
// If the permission is not granted, call requestPermissionsFromUser to apply for the permission.
if (needGrantPermission) {
try {
await globalThis.abilityContext.requestPermissionsFromUser(permissions);
// globalThis.context is Ability.context, which must be assigned a value in the MainAbility.ts file in advance.
await globalThis.context.requestPermissionsFromUser(permissions);
} catch (err) {
console.error('app permission request permissions error' + JSON.stringify(err));
}
......@@ -140,13 +142,16 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
let token: number = -1; // Used to save the token returned after the registration. The token will be used when listening for device connection/disconnection events, starting the device selection module, and updating the device connection state.
continuationManager.register().then((data) => {
console.info('register finished, ' + JSON.stringify(data));
token = data; // Obtain a token and assign a value to the token variable.
}).catch((err) => {
console.error('register failed, cause: ' + JSON.stringify(err));
});
try {
continuationManager.registerContinuation().then((data) => {
console.info('registerContinuation finished, ' + JSON.stringify(data));
token = data; // Obtain a token and assign a value to the token variable.
}).catch((err) => {
console.error('registerContinuation failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('registerContinuation failed, cause: ' + JSON.stringify(err));
}
```
4. Listen for the device connection/disconnection state.
......@@ -156,28 +161,31 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
let remoteDeviceId: string = ""; // Used to save the information about the remote device selected by the user, which will be used for cross-device continuation or collaboration.
// The token parameter is the token obtained during the registration.
continuationManager.on("deviceConnect", token, (continuationResults) => {
console.info('registerDeviceConnectCallback len: ' + continuationResults.length);
if (continuationResults.length <= 0) {
console.info('no selected device');
return;
}
remoteDeviceId = continuationResults[0].id; // Assign the deviceId of the first selected remote device to the remoteDeviceId variable.
// Pass the remoteDeviceId parameter to want.
let want = {
deviceId: remoteDeviceId,
bundleName: 'ohos.samples.continuationmanager',
abilityName: 'MainAbility'
};
// To initiate multi-device collaboration, you must obtain the ohos.permission.DISTRIBUTED_DATASYNC permission.
globalThis.abilityContext.startAbility(want).then((data) => {
console.info('StartRemoteAbility finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('StartRemoteAbility failed, cause: ' + JSON.stringify(err));
try {
// The token parameter is the token obtained during the registration.
continuationManager.on("deviceSelected", token, (continuationResults) => {
console.info('registerDeviceSelectedCallback len: ' + continuationResults.length);
if (continuationResults.length <= 0) {
console.info('no selected device');
return;
}
remoteDeviceId = continuationResults[0].id; // Assign the deviceId of the first selected remote device to the remoteDeviceId variable.
// Pass the remoteDeviceId parameter to want.
let want = {
deviceId: remoteDeviceId,
bundleName: 'ohos.samples.continuationmanager',
abilityName: 'MainAbility'
};
globalThis.abilityContext.startAbility(want).then((data) => {
console.info('StartRemoteAbility finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('StartRemoteAbility failed, cause: ' + JSON.stringify(err));
});
});
});
} catch (err) {
console.error('on failed, cause: ' + JSON.stringify(err));
}
```
The preceding multi-device collaboration operation is performed across devices in the stage model. For details about this operation in the FA model, see [Page Ability Development](https://gitee.com/openharmony/docs/blob/master/en/application-dev/ability/fa-pageability.md).
......@@ -189,35 +197,43 @@ As the entry of the ability continuation capability, **continuationManager** is
let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.CONNECTED;
// The token parameter is the token obtained during the registration, and the remoteDeviceId parameter is the remoteDeviceId obtained.
continuationManager.updateConnectStatus(token, remoteDeviceId, deviceConnectStatus).then((data) => {
console.info('updateConnectStatus finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('updateConnectStatus failed, cause: ' + JSON.stringify(err));
});
try {
continuationManager.updateContinuationState(token, remoteDeviceId, deviceConnectStatus).then((data) => {
console.info('updateContinuationState finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
}
```
Listen for the device disconnection state so that the user can stop cross-device continuation or collaboration in time. The sample code is as follows:
```ts
// The token parameter is the token obtained during the registration.
continuationManager.on("deviceDisconnect", token, (deviceIds) => {
console.info('onDeviceDisconnect len: ' + deviceIds.length);
if (deviceIds.length <= 0) {
console.info('no unselected device');
return;
}
try {
// The token parameter is the token obtained during the registration.
continuationManager.on("deviceUnselected", token, (continuationResults) => {
console.info('onDeviceUnselected len: ' + continuationResults.length);
if (continuationResults.length <= 0) {
console.info('no unselected device');
return;
}
// Update the device connection state.
let unselectedDeviceId: string = deviceIds[0]; // Assign the deviceId of the first deselected remote device to the unselectedDeviceId variable.
let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.DISCONNECTING; // Device disconnected.
// Update the device connection state.
let unselectedDeviceId: string = continuationResults[0].id; // Assign the deviceId of the first deselected remote device to the unselectedDeviceId variable.
let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.DISCONNECTING; // Device disconnected.
// The token parameter is the token obtained during the registration, and the unselectedDeviceId parameter is the unselectedDeviceId obtained.
continuationManager.updateConnectStatus(token, unselectedDeviceId, deviceConnectStatus).then((data) => {
console.info('updateConnectStatus finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('updateConnectStatus failed, cause: ' + JSON.stringify(err));
// The token parameter is the token obtained during the registration, and the unselectedDeviceId parameter is the unselectedDeviceId obtained.
continuationManager.updateContinuationState(token, unselectedDeviceId, deviceConnectStatus).then((data) => {
console.info('updateContinuationState finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
});
});
});
} catch (err) {
console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
}
```
5. Start the device selection module to show the list of available devices on the network.
......@@ -231,12 +247,16 @@ As the entry of the ability continuation capability, **continuationManager** is
continuationMode: continuationManager.ContinuationMode.COLLABORATION_SINGLE // Single-choice mode of the device selection module.
};
// The token parameter is the token obtained during the registration.
continuationManager.startDeviceManager(token, continuationExtraParams).then((data) => {
console.info('startDeviceManager finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('startDeviceManager failed, cause: ' + JSON.stringify(err));
});
try {
// The token parameter is the token obtained during the registration.
continuationManager.startContinuationDeviceManager(token, continuationExtraParams).then((data) => {
console.info('startContinuationDeviceManager finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('startContinuationDeviceManager failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('startContinuationDeviceManager failed, cause: ' + JSON.stringify(err));
}
```
6. If you do not need to perform cross-device migration or collaboration operations, you can deregister the continuation management service by passing the token obtained during the registration.
......@@ -244,10 +264,14 @@ As the entry of the ability continuation capability, **continuationManager** is
The sample code is as follows:
```ts
// The token parameter is the token obtained during the registration.
continuationManager.unregister(token).then((data) => {
console.info('unregister finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('unregister failed, cause: ' + JSON.stringify(err));
});
try {
// The token parameter is the token obtained during the registration.
continuationManager.unregisterContinuation(token).then((data) => {
console.info('unregisterContinuation finished, ' + JSON.stringify(data));
}).catch((err) => {
console.error('unregisterContinuation failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('unregisterContinuation failed, cause: ' + JSON.stringify(err));
}
```
......@@ -32,19 +32,19 @@ Example URIs:
**Table 1** Data ability lifecycle APIs
|API|Description|
|:------|:------|
|onInitialized?(info: AbilityInfo): void|Called during ability initialization to initialize the relational database (RDB).|
|update?(uri: string, valueBucket: rdb.ValuesBucket, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<number>): void|Updates data in the database.|
|query?(uri: string, columns: Array\<string>, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<ResultSet>): void|Queries data in the database.|
|delete?(uri: string, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<number>): void|Deletes one or more data records from the database.|
|normalizeUri?(uri: string, callback: AsyncCallback\<string>): void|Normalizes the URI. A normalized URI applies to cross-device use, persistence, backup, and restore. When the context changes, it ensures that the same data item can be referenced.|
|batchInsert?(uri: string, valueBuckets: Array\<rdb.ValuesBucket>, callback: AsyncCallback\<number>): void|Inserts multiple data records into the database.|
|denormalizeUri?(uri: string, callback: AsyncCallback\<string>): void|Converts a normalized URI generated by **normalizeUri** into a denormalized URI.|
|insert?(uri: string, valueBucket: rdb.ValuesBucket, callback: AsyncCallback\<number>): void|Inserts a data record into the database.|
|openFile?(uri: string, mode: string, callback: AsyncCallback\<number>): void|Opens a file.|
|getFileTypes?(uri: string, mimeTypeFilter: string, callback: AsyncCallback\<Array\<string>>): void|Obtains the MIME type of a file.|
|getType?(uri: string, callback: AsyncCallback\<string>): void|Obtains the MIME type matching the data specified by the URI.|
|executeBatch?(ops: Array\<DataAbilityOperation>, callback: AsyncCallback\<Array\<DataAbilityResult>>): void|Operates data in the database in batches.|
|call?(method: string, arg: string, extras: PacMap, callback: AsyncCallback\<PacMap>): void|Calls a custom API.|
|onInitialized(info: AbilityInfo): void|Called during ability initialization to initialize the relational database (RDB).|
|update(uri: string, valueBucket: rdb.ValuesBucket, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<number>): void|Updates data in the database.|
|query(uri: string, columns: Array\<string>, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<ResultSet>): void|Queries data in the database.|
|delete(uri: string, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<number>): void|Deletes one or more data records from the database.|
|normalizeUri(uri: string, callback: AsyncCallback\<string>): void|Normalizes the URI. A normalized URI applies to cross-device use, persistence, backup, and restore. When the context changes, it ensures that the same data item can be referenced.|
|batchInsert(uri: string, valueBuckets: Array\<rdb.ValuesBucket>, callback: AsyncCallback\<number>): void|Inserts multiple data records into the database.|
|denormalizeUri(uri: string, callback: AsyncCallback\<string>): void|Converts a normalized URI generated by **normalizeUri** into a denormalized URI.|
|insert(uri: string, valueBucket: rdb.ValuesBucket, callback: AsyncCallback\<number>): void|Inserts a data record into the database.|
|openFile(uri: string, mode: string, callback: AsyncCallback\<number>): void|Opens a file.|
|getFileTypes(uri: string, mimeTypeFilter: string, callback: AsyncCallback\<Array\<string>>): void|Obtains the MIME type of a file.|
|getType(uri: string, callback: AsyncCallback\<string>): void|Obtains the MIME type matching the data specified by the URI.|
|executeBatch(ops: Array\<DataAbilityOperation>, callback: AsyncCallback\<Array\<DataAbilityResult>>): void|Operates data in the database in batches.|
|call(method: string, arg: string, extras: PacMap, callback: AsyncCallback\<PacMap>): void|Calls a custom API.|
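These callbacks are triggered when a client operates on the Data ability through a **DataAbilityHelper**. A minimal client-side sketch is shown below; the URI and the inserted values are illustrative assumptions, not part of this change.

```javascript
import featureAbility from '@ohos.ability.featureAbility'

// Illustrative URI; the device ID segment is left empty for a local Data ability.
let uri = "dataability:///com.example.myapplication.DataAbility"
let helper = featureAbility.acquireDataAbilityHelper(uri)

// Inserting a record triggers the provider's insert() callback listed above.
helper.insert(uri, {name: "test", age: 18}, (error, index) => {
    console.info('DataAbilityHelper insert result: ' + index)
})
```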
## How to Develop
......@@ -55,6 +55,7 @@ Example URIs:
The following code snippet shows how to create a Data ability:
```javascript
import featureAbility from '@ohos.ability.featureAbility'
import dataAbility from '@ohos.data.dataAbility'
import dataRdb from '@ohos.data.rdb'
......@@ -66,7 +67,8 @@ Example URIs:
export default {
onInitialized(abilityInfo) {
console.info('DataAbility onInitialized, abilityInfo:' + abilityInfo.bundleName)
dataRdb.getRdbStore(STORE_CONFIG, 1, (err, store) => {
let context = featureAbility.getContext()
dataRdb.getRdbStore(context, STORE_CONFIG, 1, (err, store) => {
console.info('DataAbility getRdbStore callback')
store.executeSql(SQL_CREATE_TABLE, [])
rdbStore = store
......
# Page Ability Development
## Overview
### Concepts
The Page ability implements the ArkUI and provides the capability of interacting with developers. When you create an ability in DevEco Studio, DevEco Studio automatically creates template code. The capabilities related to the Page ability are implemented through the **featureAbility**, and the lifecycle callbacks are implemented through the callbacks in **app.js** or **app.ets**.
The Page ability implements the ArkUI and provides the capability of interacting with developers. When you create an ability in DevEco Studio, DevEco Studio automatically creates template code.
The capabilities related to the Page ability are implemented through the **featureAbility**, and the lifecycle callbacks are implemented through the callbacks in **app.js** or **app.ets**.
### Page Ability Lifecycle
**Ability lifecycle**
Introduction to the Page ability lifecycle:
The Page ability lifecycle defines all states of a Page ability, such as **INACTIVE**, **ACTIVE**, and **BACKGROUND**.
......@@ -27,27 +31,30 @@ Description of ability lifecycle states:
- **BACKGROUND**: The Page ability runs in the background. After being re-activated, the Page ability enters the **ACTIVE** state. After being destroyed, the Page ability enters the **INITIAL** state.
**The following figure shows the relationship between lifecycle callbacks and lifecycle states of the Page ability.**
The following figure shows the relationship between lifecycle callbacks and lifecycle states of the Page ability.
![fa-pageAbility-lifecycle](figures/fa-pageAbility-lifecycle.png)
You can override the lifecycle callbacks provided by the Page ability in the **app.js** or **app.ets** file. Currently, the **app.js** file provides only the **onCreate** and **onDestroy** callbacks, and the **app.ets** file provides the full lifecycle callbacks.
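For example, a minimal **app.js** that overrides the two callbacks available to it might look as follows (the log text is illustrative):

```javascript
// app.js: in the JS FA model, only onCreate and onDestroy can be overridden here.
export default {
    onCreate() {
        console.info('Application onCreate')
    },
    onDestroy() {
        console.info('Application onDestroy')
    }
}
```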
### Launch Type
The ability supports two launch types: singleton and multi-instance.
You can specify the launch type by setting **launchType** in the **config.json** file.
**Table 1** Introduction to startup mode
**Table 1** Startup modes
| Launch Type | Meaning | Description |
| ----------- | ------- |---------------- |
| standard | Multi-instance | A new instance is started each time an ability starts.|
| singleton | Singleton | Only one instance exists in the system. If an instance already exists when an ability is started, that instance is reused.|
| singleton | Singleton | The ability has only one instance in the system. If an instance already exists when an ability is started, that instance is reused.|
By default, **singleton** is used.
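As a sketch, the launch type is declared per ability in **config.json**; the ability name below is illustrative:

```json
{
  "module": {
    "abilities": [
      {
        "name": ".MainAbility",
        "launchType": "singleton"
      }
    ]
  }
}
```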
## Development Guidelines
### Available APIs
**Table 2** APIs provided by featureAbility
......@@ -73,23 +80,21 @@ By default, **singleton** is used.
```javascript
import featureAbility from '@ohos.ability.featureAbility'
featureAbility.startAbility({
want:
{
action: "",
entities: [""],
type: "",
deviceId: "",
bundleName: "com.example.myapplication",
/* In the FA model, abilityName consists of package and ability name. */
abilityName: "com.example.entry.secondAbility",
uri: ""
},
},
);
want: {
action: "",
entities: [""],
type: "",
deviceId: "",
bundleName: "com.example.myapplication",
/* In the FA model, abilityName consists of package and ability name. */
abilityName: "com.example.entry.secondAbility",
uri: ""
}
});
```
### Starting a Remote Page Ability
>Note
>NOTE
>
>This feature applies only to system applications, since the **getTrustedDeviceListSync** API of the **DeviceManager** class is open only to system applications.
......
# Device Usage Statistics
- [Device Usage Statistics Overview](device-usage-statistics-overview.md)
- [Device Usage Statistics Development](device-usage-statistics-dev-guide.md)
- [Device Usage Statistics Development](device-usage-statistics-use-guide.md)
......@@ -66,7 +66,7 @@ To learn more about the APIs for obtaining device location information, see [Geo
If your application needs to access the device location information when running in the background, it must be configured to be able to run in the background and be granted the **ohos.permission.LOCATION_IN_BACKGROUND** permission. In this way, the system continues to report device location information after your application moves to the background.
You can declare the required permission in your application's configuration file. For details, see [Application Package Structure Configuration File](../quick-start/stage-structure.md).
You can declare the required permission in your application's configuration file. For details, see [Access Control (Permission) Development](../security/accesstoken-guidelines.md).
2. Import the **geolocation** module by which you can implement all APIs related to the basic location capabilities.
......
# Media
- Audio
  - [Audio Overview](audio-overview.md)
  - [Audio Playback Development](audio-playback.md)
  - [Audio Recording Development](audio-recorder.md)
  - [Audio Rendering Development](audio-renderer.md)
  - [Audio Stream Management Development](audio-stream-manager.md)
  - [Audio Capture Development](audio-capturer.md)
  - [OpenSL ES Audio Playback Development](opensles-playback.md)
  - [OpenSL ES Audio Recording Development](opensles-capture.md)
  - [Audio Interruption Mode Development](audio-interruptmode.md)
- Video
  - [Video Playback Development](video-playback.md)
  - [Video Recording Development](video-recorder.md)
- Image
  - [Image Development](image.md)
- Camera
  - [Camera Development](camera.md)
  - [Distributed Camera Development](remote-camera.md)
# Audio Capture Development
## Introduction
You can use the APIs provided by **AudioCapturer** to record raw audio files, thereby implementing audio data collection.
**Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioCapturer** instance. This is because some operations can be performed only when the audio capturer is in a given state. If the application performs an operation when the audio capturer is not in the given state, the system may throw an exception or generate other undefined behavior.
## Working Principles
The following figure shows the audio capturer state transitions.
**Figure 1** Audio capturer state transitions
![audio-capturer-state](figures/audio-capturer-state.png)
- **PREPARED**: The audio capturer enters this state by calling **create()**.
- **RUNNING**: The audio capturer enters this state by calling **start()** when it is in the **PREPARED** or **STOPPED** state.
- **STOPPED**: The audio capturer in the **RUNNING** state can call **stop()** to stop capturing audio data.
- **RELEASED**: The audio capturer in the **PREPARED** or **STOPPED** state can use **release()** to release all occupied hardware and software resources. It will not transition to any other state after it enters the **RELEASED** state.
## Constraints
Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details about permission configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
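Because **ohos.permission.MICROPHONE** is a user-granted permission, the application typically also requests it at runtime. A hedged sketch under the FA model (the request code 1 is arbitrary):

```js
import featureAbility from '@ohos.ability.featureAbility';

let context = featureAbility.getContext();
context.requestPermissionsFromUser(['ohos.permission.MICROPHONE'], 1, (err, result) => {
  console.info('requestPermissionsFromUser result: ' + JSON.stringify(result));
});
```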
## How to Develop
For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8).
1. Use **createAudioCapturer()** to create an **AudioCapturer** instance.
Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording state, and register a callback for notification.
```js
import audio from '@ohos.multimedia.audio';

let audioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
  channels: audio.AudioChannel.CHANNEL_1,
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}

let audioCapturerInfo = {
  source: audio.SourceType.SOURCE_TYPE_MIC,
  capturerFlags: 0 // 0 is the extended flag bit of the audio capturer. The default value is 0.
}

let audioCapturerOptions = {
  streamInfo: audioStreamInfo,
  capturerInfo: audioCapturerInfo
}

let audioCapturer = await audio.createAudioCapturer(audioCapturerOptions);
console.log('AudioRecLog: Create audio capturer success.');
```
2. Use **start()** to start audio recording.
The capturer state will be **STATE_RUNNING** once the audio capturer is started. The application can then begin reading buffers.
```js
import audio from '@ohos.multimedia.audio';

async function startCapturer() {
  let state = audioCapturer.state;
  // The audio capturer can be started only when it is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
  if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
    state != audio.AudioState.STATE_STOPPED) {
    console.info('Capturer is not in a correct state to start');
    return;
  }
  await audioCapturer.start();

  state = audioCapturer.state;
  if (state == audio.AudioState.STATE_RUNNING) {
    console.info('AudioRecLog: Capturer started');
  } else {
    console.error('AudioRecLog: Capturer start failed');
  }
}
```
3. Read the captured audio data and convert it to a byte stream. Call **read()** repeatedly to read the data until the application stops the recording.
The following example shows how to write recorded data into a file.
```js
import fileio from '@ohos.fileio';
let state = audioCapturer.state;
// The read operation can be performed only when the state is STATE_RUNNING.
if (state != audio.AudioState.STATE_RUNNING) {
console.info('Capturer is not in a correct state to read');
return;
}
const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the collected audio file.
let fd = fileio.openSync(path, 0o102, 0o777);
if (fd !== null) {
console.info('AudioRecLog: file fd created');
}
else {
  console.info('AudioRecLog: file fd create : FAILED');
  return;
}

// Obtain a proper minimum buffer size for reading.
let bufferSize = await audioCapturer.getBufferSize();

fd = fileio.openSync(path, 0o2002, 0o666);
if (fd !== null) {
console.info('AudioRecLog: file fd opened in append mode');
}
let numBuffersToCapture = 150; // Write data for 150 times.
while (numBuffersToCapture) {
let buffer = await audioCapturer.read(bufferSize, true);
if (typeof buffer == 'undefined') {
console.info('AudioRecLog: read buffer failed');
} else {
let number = fileio.writeSync(fd, buffer);
console.info(`AudioRecLog: data written: ${number}`);
}
numBuffersToCapture--;
}
```
4. Once the recording is complete, call **stop()** to stop the recording.
```js
async function stopCapturer() {
let state = audioCapturer.state;
// The audio capturer can be stopped only when it is in STATE_RUNNING or STATE_PAUSED state.
if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
console.info('AudioRecLog: Capturer is not running or paused');
return;
}
await audioCapturer.stop();
state = audioCapturer.state;
if (state == audio.AudioState.STATE_STOPPED) {
console.info('AudioRecLog: Capturer stopped');
} else {
console.error('AudioRecLog: Capturer stop failed');
}
}
```
5. After the task is complete, call **release()** to release related resources.
```js
async function releaseCapturer() {
let state = audioCapturer.state;
// The audio capturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
console.info('AudioRecLog: Capturer already released');
return;
}
await audioCapturer.release();
state = audioCapturer.state;
if (state == audio.AudioState.STATE_RELEASED) {
console.info('AudioRecLog: Capturer released');
} else {
console.info('AudioRecLog: Capturer release failed');
}
}
```
6. (Optional) Obtain the audio capturer information.
You can use the following code to obtain the audio capturer information:
```js
// Obtain the audio capturer state.
let state = audioCapturer.state;
// Obtain the audio capturer information.
let audioCapturerInfo : audio.AudioCapturerInfo = await audioCapturer.getCapturerInfo();
// Obtain the audio stream information.
let audioStreamInfo : audio.AudioStreamInfo = await audioCapturer.getStreamInfo();
// Obtain the audio stream ID.
let audioStreamId : number = await audioCapturer.getAudioStreamId();
// Obtain the Unix timestamp, in nanoseconds.
let audioTime : number = await audioCapturer.getAudioTime();
// Obtain a proper minimum buffer size.
let bufferSize : number = await audioCapturer.getBufferSize();
```
7. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.
After the mark reached event is subscribed to, when the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('markReach', (reachNumber) => {
console.info('Mark reach event Received');
console.info(`The Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for.
```
8. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event.
After the period reached event is subscribed to, each time the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('periodReach', (reachNumber) => {
console.info('Period reach event Received');
console.info(`In this period, the Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for.
```
9. If your application needs to perform some operations when the audio capturer state is updated, it can subscribe to the state change event. When the audio capturer state is updated, the application receives a callback containing the event type.
```js
audioCapturer.on('stateChange', (state) => {
console.info(`AudioCapturerLog: Changed State to : ${state}`)
switch (state) {
case audio.AudioState.STATE_PREPARED:
console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------');
console.info('Audio State is : Prepared');
break;
case audio.AudioState.STATE_RUNNING:
console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------');
console.info('Audio State is : Running');
break;
case audio.AudioState.STATE_STOPPED:
console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------');
console.info('Audio State is : stopped');
break;
case audio.AudioState.STATE_RELEASED:
console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------');
console.info('Audio State is : released');
break;
default:
console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
console.info('Audio State is : invalid');
break;
}
});
```
# Audio Interruption Mode Development
## Introduction

The audio interruption mode is used to control the playback of multiple audio streams.

Audio applications can set the audio interruption mode to independent or shared under **AudioRenderer**.

In shared mode, multiple audio streams share one session ID. In independent mode, each audio stream has an independent session ID.

**Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions.
## How to Develop
For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.<br>
   Set parameters of the **AudioRenderer** instance in **audioRendererOptions**.<br>
   This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.

```js
import audio from '@ohos.multimedia.audio';
var audioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
  channels: audio.AudioChannel.CHANNEL_1,
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
var audioRendererInfo = {
  content: audio.ContentType.CONTENT_TYPE_SPEECH,
  usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
  rendererFlags: 1
}
var audioRendererOptions = {
  streamInfo: audioStreamInfo,
  rendererInfo: audioRendererInfo
}
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
```
2. Set the audio interruption mode.
After the **AudioRenderer** instance is initialized, you can set the audio interruption mode.<br>
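A minimal sketch of this step, assuming the **setInterruptMode()** API of **AudioRenderer** and the **audio.InterruptMode** enumeration:

```js
// INDEPENDENT_MODE gives this stream its own session ID; SHARE_MODE shares one session ID.
let mode = audio.InterruptMode.INDEPENDENT_MODE;
audioRenderer.setInterruptMode(mode).then(() => {
  console.info('setInterruptMode Success!');
}).catch((err) => {
  console.error(`setInterruptMode Fail: ${err}`);
});
```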
# Audio Playback Development
## Introduction
You can use audio playback APIs to convert audio data into audible analog signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, stop playback, release resources, set the volume, seek to a playback position, and obtain track information.
## Working Principles
The following figures show the audio playback status changes and the interaction with external modules for audio playback.
**Figure 1** Audio playback state transition
![en-us_image_audio_state_machine](figures/en-us_image_audio_state_machine.png)
**NOTE**: If the status is **Idle**, setting the **src** attribute does not change the status. In addition, after the **src** attribute is set successfully, you must call **reset()** before setting it to another value.
**Figure 2** Interaction with external modules for audio playback
![en-us_image_audio_player](figures/en-us_image_audio_player.png)
**NOTE**: When a third-party application calls an API provided by the JS interface layer, the framework layer invokes the audio component through the media service of the native framework and outputs the software-decoded audio data to the audio HDI at the hardware interface layer to implement audio playback.
## How to Develop
For details about the APIs, see [AudioPlayer in the Media API](../reference/apis/js-apis-media.md#audioplayer).
> **NOTE**
>
> The method for obtaining the path in the FA model is different from that in the stage model. **pathDir** used in the sample code below is an example. You need to obtain the path based on project requirements. For details about how to obtain the path, see [Application Sandbox Path Guidelines](../reference/apis/js-apis-fileio.md#guidelines).
### Full-Process Scenario
The full audio playback process includes creating an instance, setting the URI, playing audio, seeking to the playback position, setting the volume, pausing playback, obtaining track information, stopping playback, resetting the player, and releasing resources.
```js
setCallBack(audioPlayer); // Set the event callbacks.
// 2. Set the URI of the audio file.
let fdPath = 'fd://'
let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
```
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
export class AudioDemo {
// Set the player callbacks.
setCallBack(audioPlayer) {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://'
let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
```
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
export class AudioDemo {
// Set the player callbacks.
private isNextMusic = false;
async nextMusic(audioPlayer) {
this.isNextMusic = true;
let nextFdPath = 'fd://'
let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let nextpath = pathDir + '/02.mp3'
await fileIO.open(nextpath).then((fdNumber) => {
nextFdPath = nextFdPath + '' + fdNumber;
console.info('open fd success fd is' + nextFdPath);
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://'
let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
```
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'
export class AudioDemo {
// Set the player callbacks.
setCallBack(audioPlayer) {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://'
let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath);
```
# Audio Recording Development
## Introduction
During audio recording, audio signals are captured, encoded, and saved to files. You can specify parameters such as the sampling rate, number of audio channels, encoding format, encapsulation format, and output file path for audio recording.
## Working Principles
The following figures show the audio recording state transition and the interaction with external modules for audio recording.
**Figure 1** Audio recording state transition
**Figure 2** Interaction with external modules for audio recording
![en-us_image_audio_recorder_zero](figures/en-us_image_audio_recorder_zero.png)
**NOTE**: When a third-party recording application or recorder calls an API provided by the JS interface layer, the framework layer invokes the audio component through the media service of the native framework to obtain the audio data captured through the audio HDI. The framework layer then encodes the audio data through software and saves the encoded and encapsulated audio data to a file to implement audio recording.
## Constraints
Before developing audio recording, configure the **ohos.permission.MICROPHONE** permission for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop
For details about the APIs, see [AudioRecorder in the Media API](../reference/apis/js-apis-media.md#audiorecorder).
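For orientation, a minimal sketch of creating and starting an **AudioRecorder**; the configuration values are illustrative, and **'fd://xx'** is a placeholder for a file descriptor you obtain yourself:

```js
import media from '@ohos.multimedia.media';

let audioRecorder = media.createAudioRecorder();
let audioRecorderConfig = {
  audioEncoder: media.AudioEncoder.AAC_LC,
  audioEncodeBitRate: 22050,
  audioSampleRate: 22050,
  numberOfChannels: 2,
  format: media.AudioOutputFormat.AAC_ADTS,
  uri: 'fd://xx' // Placeholder: open the target file with fileio to obtain a real fd.
};
audioRecorder.on('prepare', () => {
  console.info('AudioRecorder prepared; recording started');
  audioRecorder.start();
});
audioRecorder.prepare(audioRecorderConfig);
```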
# OpenSL ES Audio Recording Development
## Introduction
You can use OpenSL ES to develop the audio recording function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.
# OpenSL ES Audio Playback Development
## Introduction
You can use OpenSL ES to develop the audio playback function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.
5. Obtain the **bufferQueueItf** instance of the **SL_IID_OH_BUFFERQUEUE** interface.
```c++
SLOHBufferQueueItf bufferQueueItf;
(*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_OH_BUFFERQUEUE, &bufferQueueItf);
```
# Video Playback Development
## Introduction
You can use video playback APIs to convert video data into visible signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, stop playback, release resources, set the volume, seek to a playback position, set the playback speed, and obtain track information. This document describes development for the following video playback scenarios: full-process, normal playback, video switching, and loop playback.
## Working Principles
The following figures show the video playback state transition and the interaction with external modules for video playback.
**Figure 1** Video playback state transition
![en-us_image_video_state_machine](figures/en-us_image_video_state_machine.png)
**Figure 2** Interaction with external modules for video playback
![en-us_image_video_player](figures/en-us_image_video_player.png)
**NOTE**: When a third-party application calls an API provided by the JS interface layer, the framework layer invokes the audio component through the media service of the native framework to output the software-decoded audio data to the audio HDI. The graphics subsystem outputs the image data decoded by the codec HDI at the hardware interface layer to the display HDI. In this way, video playback is implemented.
*Note: Video playback requires hardware capabilities such as display, audio, and codec.*
1. A third-party application obtains a surface ID from the XComponent.
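For reference, a hedged ArkTS sketch of this step; the component ID and layout values are illustrative:

```js
@Entry
@Component
struct VideoPlayerPage {
  xcomponentController: XComponentController = new XComponentController();

  build() {
    Column() {
      XComponent({ id: 'xc', type: 'surface', controller: this.xcomponentController })
        .onLoad(() => {
          // Hand the surface ID to the video player so it can render to this surface.
          let surfaceId = this.xcomponentController.getXComponentSurfaceId();
          console.info('surfaceId: ' + surfaceId);
        })
        .width('100%')
        .height(360)
    }
  }
}
```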
# Video Recording Development
## Introduction
You can use video recording APIs to capture audio and video signals, encode them, and save them to files. You can start, suspend, resume, and stop recording, and release resources. You can also specify parameters such as the encoding format, encapsulation format, and file path for video recording.
## Working Principles
The following figures show the video recording state transition and the interaction with external modules for video recording.
**Figure 1** Video recording state transition
![en-us_image_video_recorder_state_machine](figures/en-us_image_video_recorder_state_machine.png)
**Figure 2** Interaction with external modules for video recording
![en-us_image_video_recorder_zero](figures/en-us_image_video_recorder_zero.png)
**NOTE**: When a third-party camera application or the system camera calls an API provided by the JS interface layer, the framework layer uses the media service of the native framework to invoke the audio component. Through the audio HDI, the audio component captures audio data, encodes the audio data through software, and saves the encoded audio data to a file. The graphics subsystem captures image data through the video HDI, encodes the image data through the video codec HDI, and saves the encoded image data to a file. In this way, video recording is implemented.
## Constraints
Before developing video recording, configure the permissions **ohos.permission.MICROPHONE** and **ohos.permission.CAMERA** for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop
For details about the APIs, see [VideoRecorder in the Media API](../reference/apis/js-apis-media.md#videorecorder9).
- [Drawing Development](drawing-guidelines.md)
- [Raw File Development](rawfile-guidelines.md)
- [Native Window Development](native-window-guidelines.md)
- [Using MindSpore Lite for Model Inference](mindspore-lite-guidelines.md)
The Native Drawing module provides APIs for drawing 2D graphics and text. The following scenarios are common for drawing development:
* Drawing 2D graphics
* Drawing text
## Available APIs
| API| Description|
| -------- | -------- |
| OH_Drawing_BitmapCreate (void) | Creates a bitmap object.|
| OH_Drawing_BitmapBuild (OH_Drawing_Bitmap *, const uint32_t width, const uint32_t height, const OH_Drawing_BitmapFormat *) | Initializes the width and height of a bitmap object and sets the pixel format for the bitmap.|
| OH_Drawing_CanvasDrawPath (OH_Drawing_Canvas *, const OH_Drawing_Path *) | Draws a path.|
| OH_Drawing_PathCreate (void) | Creates a path object.|
| OH_Drawing_PathMoveTo (OH_Drawing_Path *, float x, float y) | Sets the start point of a path.|
| OH_Drawing_PathLineTo (OH_Drawing_Path \*, float x, float y) | Draws a line segment from the last point of a path to the target point.|
| OH_Drawing_PathClose (OH_Drawing_Path *) | Closes a path. A line segment from the start point to the last point of the path is added.|
| OH_Drawing_PenCreate (void) | Creates a pen object.|
| OH_Drawing_PenSetAntiAlias (OH_Drawing_Pen *, bool) | Checks whether anti-aliasing is enabled for a pen. If anti-aliasing is enabled, edges will be drawn with partial transparency.|
## Development Procedure for Text Drawing
The following steps describe how to use the text drawing and display feature of the Native Drawing module.
1. **Create a canvas and a bitmap.**
```c++
// Set the maximum width.
double maxWidth = 800.0;
OH_Drawing_TypographyLayout(typography, maxWidth);
// Set the start position for drawing the text on the canvas.
double position[2] = {10.0, 15.0};
// Draw the text on the canvas.
OH_Drawing_TypographyPaint(typography, cCanvas, position[0], position[1]);
```
# Native Window Development
## When to Use
**NativeWindow** is a local platform-based window of OpenHarmony that represents the producer of a graphics queue. It provides APIs for you to create a native window from **Surface**, create a native window buffer from **SurfaceBuffer**, and request and flush a buffer.
The following scenarios are common for native window development:
* Request a graphics buffer by using the NAPI provided by **NativeWindow**, write the produced graphics content to the buffer, and flush the buffer to the graphics queue.
* Request and flush a buffer when adapting to the **eglswapbuffer** interface at the EGL.
## Available APIs
| API| Description|
| -------- | -------- |
| OH_NativeWindow_CreateNativeWindowFromSurface (void \*pSurface) | Creates a **NativeWindow** instance. A new **NativeWindow** instance is created each time this function is called.|
| OH_NativeWindow_DestroyNativeWindow (OHNativeWindow \*window) | Decreases the reference count of a **NativeWindow** instance by 1 and, when the reference count reaches 0, destroys the instance.|
| OH_NativeWindow_CreateNativeWindowBufferFromSurfaceBuffer (void \*pSurfaceBuffer) | Creates a **NativeWindowBuffer** instance. A new **NativeWindowBuffer** instance is created each time this function is called.|
| OH_NativeWindow_DestroyNativeWindowBuffer (OHNativeWindowBuffer \*buffer) | Decreases the reference count of a **NativeWindowBuffer** instance by 1 and, when the reference count reaches 0, destroys the instance.|
| OH_NativeWindow_NativeWindowRequestBuffer (OHNativeWindow \*window, OHNativeWindowBuffer \*\*buffer, int \*fenceFd) | Requests a **NativeWindowBuffer** through a **NativeWindow** instance for content production.|
| OH_NativeWindow_NativeWindowFlushBuffer (OHNativeWindow \*window, OHNativeWindowBuffer \*buffer, int fenceFd, Region region) | Flushes the **NativeWindowBuffer** filled with the content to the buffer queue through a **NativeWindow** instance for content consumption.|
| OH_NativeWindow_NativeWindowAbortBuffer (OHNativeWindow \*window, OHNativeWindowBuffer \*buffer) | Returns the **NativeWindowBuffer** to the buffer queue through a **NativeWindow** instance, without filling in any content. The **NativeWindowBuffer** can be used for another request.|
| OH_NativeWindow_NativeWindowHandleOpt (OHNativeWindow \*window, int code,...) | Sets or obtains the attributes of a native window, including the width, height, and content format.|
| OH_NativeWindow_GetBufferHandleFromNative (OHNativeWindowBuffer \*buffer) | Obtains the pointer to a **BufferHandle** of a **NativeWindowBuffer** instance.|
| OH_NativeWindow_NativeObjectReference (void \*obj) | Adds the reference count of a native object.|
| OH_NativeWindow_NativeObjectUnreference (void \*obj) | Decreases the reference count of a native object and, when the reference count reaches 0, destroys this object.|
| OH_NativeWindow_GetNativeObjectMagic (void \*obj) | Obtains the magic ID of a native object.|
| OH_NativeWindow_NativeWindowSetScalingMode (OHNativeWindow \*window, uint32_t sequence, OHScalingMode scalingMode) | Sets the scaling mode of the native window.|
| OH_NativeWindow_NativeWindowSetMetaData(OHNativeWindow \*window, uint32_t sequence, int32_t size, const OHHDRMetaData \*metaData) | Sets the HDR static metadata of the native window.|
| OH_NativeWindow_NativeWindowSetMetaDataSet(OHNativeWindow \*window, uint32_t sequence, OHHDRMetadataKey key, int32_t size, const uint8_t \*metaData) | Sets the HDR static metadata set of the native window.|
| OH_NativeWindow_NativeWindowSetTunnelHandle(OHNativeWindow \*window, const OHExtDataHandle \*handle) | Sets the tunnel handle to the native window.|
## How to Develop
The following describes how to use the NAPI provided by **NativeWindow** to request a graphics buffer, write the produced graphics content to the buffer, and flush the buffer to the graphics queue.
1. Obtain a **NativeWindow** instance. For example, use **Surface** to create a **NativeWindow** instance.
```c++
sptr<OHOS::Surface> cSurface = Surface::CreateSurfaceAsConsumer();
sptr<IBufferConsumerListener> listener = new BufferConsumerListenerTest();
cSurface->RegisterConsumerListener(listener);
sptr<OHOS::IBufferProducer> producer = cSurface->GetProducer();
sptr<OHOS::Surface> pSurface = Surface::CreateSurfaceAsProducer(producer);
OHNativeWindow* nativeWindow = OH_NativeWindow_CreateNativeWindow(&pSurface);
```
2. Set the attributes of a native window buffer by using **OH_NativeWindow_NativeWindowHandleOpt**.
```c++
// Set the read and write scenarios of the native window buffer.
int code = SET_USAGE;
int32_t usage = BUFFER_USAGE_CPU_READ | BUFFER_USAGE_CPU_WRITE | BUFFER_USAGE_MEM_DMA;
int32_t ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, usage);
// Set the width and height of the native window buffer.
code = SET_BUFFER_GEOMETRY;
int32_t width = 0x100;
int32_t height = 0x100;
ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, width, height);
// Set the step of the native window buffer.
code = SET_STRIDE;
int32_t stride = 0x8;
ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, stride);
// Set the format of the native window buffer.
code = SET_FORMAT;
int32_t format = PIXEL_FMT_RGBA_8888;
ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, format);
```
3. Request a native window buffer from the graphics queue.
```c++
struct NativeWindowBuffer* buffer = nullptr;
int fenceFd;
// Obtain the NativeWindowBuffer instance by calling OH_NativeWindow_NativeWindowRequestBuffer.
OH_NativeWindow_NativeWindowRequestBuffer(nativeWindow_, &buffer, &fenceFd);
// Obtain the buffer handle by calling OH_NativeWindow_GetNativeBufferHandleFromNative.
BufferHandle* bufferHandle = OH_NativeWindow_GetNativeBufferHandleFromNative(buffer);
```
4. Write the produced content to the native window buffer.
```c++
auto image = static_cast<uint8_t *>(buffer->sfbuffer->GetVirAddr());
static uint32_t value = 0x00;
value++;
uint32_t *pixel = reinterpret_cast<uint32_t *>(image);
for (uint32_t x = 0; x < width; x++) {
for (uint32_t y = 0; y < height; y++) {
*pixel++ = value;
}
}
```
5. Flush the native window buffer to the graphics queue.
```c++
// Set the refresh region. If Rect in Region is a null pointer or rectNumber is 0, all contents in the native window buffer are changed.
Region region{nullptr, 0};
// Flush the buffer to the consumer through OH_NativeWindow_NativeWindowFlushBuffer, for example, by displaying it on the screen.
OH_NativeWindow_NativeWindowFlushBuffer(nativeWindow_, buffer, fenceFd, region);
```
- Getting Started
- [Preparations](start-overview.md)
- [Getting Started with ArkTS in Stage Model](start-with-ets-stage.md)
- [Getting Started with ArkTS in FA Model](start-with-ets-fa.md)
- [Getting Started with JavaScript in FA Model](start-with-js-fa.md)
- Development Fundamentals
- [Application Package Structure Configuration File (FA Model)](package-structure.md)
- [Application Package Structure Configuration File (Stage Model)](stage-structure.md)
- [SysCap](syscap.md)
- [HarmonyAppProvision Configuration File](app-provision-structure.md)
# Resource Categories and Access
## Resource Categories
Resource files used during application development must be stored in specified directories for management.
### resources Directory
The **resources** directory consists of three types of sub-directories: the **base** sub-directory, qualifiers sub-directories, and the **rawfile** sub-directory. The common resource files used across projects in the stage model are stored in the **resources** directory under **AppScope**.
Example of the **resources** directory:
```
resources
|---base // Default directory
| |---element
| | |---string.json
| |---media
| | |---icon.png
|---en_GB-vertical-car-mdpi // Example of a qualifiers sub-directory, which needs to be created on your own
| |---element
| | |---string.json
| |---media
| | |---icon.png
|---rawfile
```
**Table 1** Classification of the resources directory
| Category | base Sub-directory | Qualifiers Sub-directory | rawfile Sub-directory |
| ---- | ---------------------------------------- | ---------------------------------------- | ---------------------------------------- |
| Structure| The **base** sub-directory is a default directory. If no qualifiers sub-directories in the **resources** directory of the application match the device status, the resource file in the **base** sub-directory will be automatically referenced.<br>Resource group sub-directories are located at the second level of sub-directories to store basic elements such as strings, colors, and boolean values, as well as resource files such as media, animations, and layouts. For details, see [Resource Group Sub-directories](#resource-group-sub-directories).| You need to create qualifiers sub-directories on your own. Each directory name consists of one or more qualifiers that represent the application scenarios or device characteristics. For details, see [Qualifiers Sub-directories](#qualifiers-sub-directories).<br>Resource group sub-directories are located at the second level of sub-directories to store basic elements such as strings, colors, and boolean values, as well as resource files such as media, animations, and layouts. For details, see [Resource Group Sub-directories](#resource-group-sub-directories). | You can create multiple levels of sub-directories with custom directory names. They can be used to store various resource files.<br>However, resource files in the **rawfile** sub-directory will not be matched based on the device status.|
| Compilation| Resource files in the sub-directory are compiled into binary files, and each resource file is assigned an ID. | Resource files in the sub-directory are compiled into binary files, and each resource file is assigned an ID. | Resource files in the sub-directory are directly packed into the application without being compiled, and no IDs will be assigned to the resource files. |
| Reference| Resource files in the sub-directory are referenced based on the resource type and resource name. | Resource files in the sub-directory are referenced based on the resource type and resource name. | Resource files in the sub-directory are referenced based on the file path and file name. |
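For example, compiled resources in the **base** and qualifiers sub-directories are referenced by resource type and name, while **rawfile** resources are referenced by file name; in ArkTS this might look as follows (**hello** and **icon.png** are illustrative names):

```js
Text($r('app.string.hello'))  // element resource: a string named "hello"
Image($r('app.media.icon'))   // media resource: media/icon.png
Image($rawfile('icon.png'))   // raw file referenced by file name
```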
### Qualifiers Sub-directories
The name of a qualifiers sub-directory consists of one or more qualifiers that represent the application scenarios or device characteristics, covering the mobile country code (MCC), mobile network code (MNC), language, script, country or region, screen orientation, device type, night mode, and screen density. The qualifiers are separated using underscores (\_) or hyphens (\-). Before creating a qualifiers sub-directory, familiarize yourself with the directory naming conventions and the rules for matching qualifiers sub-directories and the device status.
**Naming Conventions for Qualifiers Sub-directories**
- Qualifiers are ordered in the following sequence: **\_MCC_MNC-language_script_country/region-orientation-device-color mode-density**. You can select one or multiple qualifiers to name your sub-directory based on your application scenarios and device characteristics.
- Separation between qualifiers: The language, script, and country/region qualifiers are separated using underscores (\_); the MNC and MCC qualifiers are also separated using underscores (\_); other qualifiers are separated using hyphens (\-). For example, **zh_Hant_CN** and **zh_CN-car-ldpi**.
- Value range of qualifiers: The value of each qualifier must meet the requirements specified in the following table. Otherwise, the resource files in the resources directory cannot be matched.
**Table 2** Requirements for qualifier values
| Qualifier Type | Description and Value Range |
| ----------- | ---------------------------------------- |
| MCC&MNC| Indicates the MCC and MNC, which are obtained from the network where the device is registered. The MCC can be either followed by the MNC with an underscore (\_) in between or be used independently. For example, **mcc460** indicates China, and **mcc460\_mnc00** indicates China\_China Mobile.<br>For details about the value range, refer to **ITU-T E.212** (the international identification plan for public networks and subscriptions).|
| Language | Indicates the language used by the device. The value consists of two or three lowercase letters. For example, **zh** indicates Chinese, **en** indicates English, and **mai** indicates Maithili.<br>For details about the value range, refer to **ISO 639** (codes for the representation of names of languages).|
| Text | Indicates the script type used by the device. The value starts with one uppercase letter followed by three lowercase letters. For example, **Hans** indicates simplified Chinese, and **Hant** indicates traditional Chinese.<br>For details about the value range, refer to **ISO 15924** (codes for the representation of names of scripts).|
| Country/Region | Indicates the country or region where the user is located. The value consists of two or three uppercase letters or three digits. For example, **CN** indicates China, and **GB** indicates the United Kingdom.<br>For details about the value range, refer to **ISO 3166-1** (codes for the representation of names of countries and their subdivisions).|
| Screen orientation | Indicates the screen orientation of the device. The value can be:<br>- **vertical**: portrait orientation<br>- **horizontal**: landscape orientation|
| Device type | Indicates the device type. The value can be:<br>- **car**: head unit<br>- **tv**: smart TV<br>- **wearable**: smart wearable|
| Color mode | Indicates the color mode of the device. The value can be:<br>- **dark**: dark mode<br>- **light**: light mode|
| Screen density | Indicates the screen density of the device, in dpi. The value can be:<br>- **sdpi**: screen density with small-scale dots per inch (SDPI). This value is applicable for devices with a DPI range of (0, 120].<br>- **mdpi**: screen density with medium-scale dots per inch (MDPI). This value is applicable for devices with a DPI range of (120, 160].<br>- **ldpi**: screen density with large-scale dots per inch (LDPI). This value is applicable for devices with a DPI range of (160, 240].<br>- **xldpi**: screen density with extra-large-scale dots per inch (XLDPI). This value is applicable for devices with a DPI range of (240, 320].<br>- **xxldpi**: screen density with extra-extra-large-scale dots per inch (XXLDPI). This value is applicable for devices with a DPI range of (320, 480].<br>- **xxxldpi**: screen density with extra-extra-extra-large-scale dots per inch (XXXLDPI). This value is applicable for devices with a DPI range of (480, 640].|
**Rules for Matching Qualifiers Sub-directories and Device Resources**
- Qualifiers are matched against the device status in the following priority order: MCC&MNC > locale (options: language, language_script, language_country/region, and language_script_country/region) > screen orientation > device type > color mode > screen density.
- If a qualifiers sub-directory contains the **MCC, MNC, language, script, screen orientation, device type, or color mode** qualifier, its value must match the current device status for the sub-directory to participate in resource matching. For example, the qualifiers sub-directory **zh_CN-car-ldpi** cannot be used to match resources for a device whose locale is **en_US**.
### Resource Group Sub-directories
You can create resource group sub-directories (including element, media, and rawfile) in the **base** and qualifiers sub-directories to store resource files of specific types.
**Table 3** Resource group sub-directories
| Resource Group Sub-directory | Description | Resource File |
| ------- | ---------------------------------------- | ---------------------------------------- |
| element | Indicates element resources. Each type of data is represented by a JSON file. The options are as follows:<br>- **boolean**: boolean data<br>- **color**: color data<br>- **float**: floating-point data<br>- **intarray**: array of integers<br>- **integer**: integer data<br>- **pattern**: pattern data<br>- **plural**: plural form data<br>- **strarray**: array of strings<br>- **string**: string data| It is recommended that files in the **element** sub-directory be named the same as the following files, each of which can contain only data of the same type:<br>- boolean.json<br>- color.json<br>- float.json<br>- intarray.json<br>- integer.json<br>- pattern.json<br>- plural.json<br>- strarray.json<br>- string.json |
| media | Indicates media resources, including non-text files such as images, audios, and videos. | The file name can be customized, for example, **icon.png**. |
| rawfile | Indicates other types of files, which are stored in their raw formats after the application is built as an HAP file. They will not be integrated into the **resources.index** file.| The file name can be customized. |
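For reference, here is a hypothetical layout that combines the **base** sub-directory, one qualifiers sub-directory, and the resource group sub-directories described above (the file names are illustrative):
```
resources
|-- base                      # default resources
|   |-- element
|   |   |-- color.json
|   |   `-- string.json
|   `-- media
|       `-- icon.png
|-- zh_CN-car-ldpi            # qualifiers sub-directory: language_country/region-device-density
|   |-- element
|   |   `-- string.json
|   `-- media
|       `-- icon.png
`-- rawfile                   # raw files, packaged as is and not indexed in resources.index
    `-- test.png
```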
**Media Resource Types**
**Table 4** Image resource types
| Format | File Name Extension|
| ---- | ----- |
| JPEG | .jpg |
| PNG | .png |
| GIF | .gif |
| SVG | .svg |
| WEBP | .webp |
| BMP | .bmp |
**Table 5** Audio and video resource types
| Format | File Name Extension |
| ------------------------------------ | --------------- |
| H.263 | .3gp <br>.mp4 |
| H.264 AVC <br> Baseline Profile (BP) | .3gp <br>.mp4 |
| MPEG-4 SP | .3gp |
| VP8 | .webm <br> .mkv |
**Resource File Examples**
The content of the **color.json** file is as follows:
```json
{
"color": [
{
"name": "color_hello",
"value": "#ffff0000"
},
{
"name": "color_world",
"value": "#ff0000ff"
}
]
}
```
The content of the **float.json** file is as follows:
```json
{
"float":[
{
"name":"font_hello",
"value":"28.0fp"
},
{
"name":"font_world",
"value":"20.0fp"
}
]
}
```
The content of the **string.json** file is as follows:
```json
{
"string":[
{
"name":"string_hello",
"value":"Hello"
},
{
"name":"string_world",
"value":"World"
},
{
"name":"message_arrive",
"value":"We will arrive at %s."
}
]
}
```
The content of the **plural.json** file is as follows:
```json
{
"plural":[
{
"name":"eat_apple",
"value":[
{
"quantity":"one",
"value":"%d apple"
},
{
"quantity":"other",
"value":"%d apples"
}
]
}
]
}
```
## Resource Access
### Application Resources
**Creating a Resource File**
You can create a sub-directory and its files under the **resources** directory based on the preceding descriptions of the qualifiers sub-directories and resource group sub-directories.
DevEco Studio provides a wizard for you to create resource directories and resource files.
- Creating a Resource Directory and Resource File
Right-click the **resources** directory and choose **New > Resource File**.
If no qualifier is selected, the file is created in a resource type sub-directory under **base**. If one or more qualifiers are selected, the system automatically generates a sub-directory and creates the file in this sub-directory.
The created sub-directory is automatically named in the format of **Qualifiers.Resource group**. For example, if you create a sub-directory by setting **Orientation** to **Vertical** and **Resource type** to **Graphic**, the system automatically generates a sub-directory named **vertical.graphic**.
![create-resource-file-1](figures/create-resource-file-1.png)
- Creating a Resource Directory
Right-click the **resources** directory and choose **New > Resource Directory**. This operation creates a sub-directory only.
Select a resource group type and set qualifiers. Then the system automatically generates the sub-directory name. The sub-directory is automatically named in the format of **Qualifiers.Resource group**. For example, if you create a sub-directory by setting **Orientation** to **Vertical** and **Resource type** to **Graphic**, the system automatically generates a sub-directory named **vertical.graphic**.
![create-resource-file-2](figures/create-resource-file-2.png)
- Creating a Resource File
Right-click a sub-directory under **resources** and choose **New > *XXX* Resource File**. This operation creates a resource file under this sub-directory.
For example, you can create an element resource file in the **element** sub-directory.
![create-resource-file-3](figures/create-resource-file-3.png)
**Accessing Application Resources**
To reference an application resource in a project, use the **"$r('app.type.name')"** format. **app** indicates the resource defined in the **resources** directory of the application. **type** indicates the resource type (or the location where the resource is stored). The value can be **color**, **float**, **string**, **plural**, or **media**. **name** indicates the resource name, which you set when defining the resource.
When referencing resources in the **rawfile** sub-directory, use the **"$rawfile('filename')"** format, where **filename** indicates the path of the file relative to the **rawfile** directory. The path must include the file name extension and cannot start with a slash (/).
> **NOTE**
>
> Resource descriptors accept only string literals, such as **'app.type.name'**; they cannot be built through string concatenation.
>
> The return value of **$r** is a **Resource** object. You can obtain the corresponding string by using the [getStringValue](../reference/apis/js-apis-resource-manager.md) API.
In the **.ets** file, you can use the resources defined in the **resources** directory.
```ts
Text($r('app.string.string_hello'))
  .fontColor($r('app.color.color_hello'))
  .fontSize($r('app.float.font_hello'))

Text($r('app.string.string_world'))
  .fontColor($r('app.color.color_world'))
  .fontSize($r('app.float.font_world'))

// Reference a string resource. The second parameter of $r replaces %s in the string.
Text($r('app.string.message_arrive', "five of the clock"))
  .fontColor($r('app.color.color_hello'))
  .fontSize($r('app.float.font_hello'))

// Reference a plural resource. The second parameter determines which plural form
// applies ("one" or "other"); the third parameter replaces %d in the string.
Text($r('app.plural.eat_apple', 5, 5))
  .fontColor($r('app.color.color_world'))
  .fontSize($r('app.float.font_world'))

Image($r('app.media.my_background_image')) // Reference a media resource.
Image($rawfile('test.png'))                // Reference an image in the rawfile directory.
Image($rawfile('newDir/newTest.png'))      // Reference an image in a sub-directory of rawfile.
```
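As the preceding note states, **$r** returns a **Resource** object rather than a plain string. When the string itself is needed at runtime, it can be resolved through the resource manager. The following is a minimal sketch, assuming a stage-model **UIAbilityContext** and the **getStringValue** API; the helper name **readHello** is illustrative:
```ts
import common from '@ohos.app.ability.common';

// Illustrative helper: resolves a Resource object to its localized string.
async function readHello(context: common.UIAbilityContext): Promise<string> {
  // $r(...) yields a Resource object; passing its id to the resource manager
  // returns the string that matches the current device locale.
  return await context.resourceManager.getStringValue($r('app.string.string_hello').id);
}
```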
### System Resources
System resources include colors, rounded corners, fonts, spacing, character strings, and images. By using system resources, you can develop different applications with the same visual style.
To reference a system resource, use the **"$r('sys.type.resource_id')"** format, where **sys** indicates a system resource, **type** indicates the resource type (**color**, **float**, **string**, or **media**), and **resource_id** indicates the resource ID.
```ts
Text('Hello')
.fontColor($r('sys.color.ohos_id_color_emphasize'))
.fontSize($r('sys.float.ohos_id_text_size_headline1'))
.fontFamily($r('sys.string.ohos_id_text_font_family_medium'))
.backgroundColor($r('sys.color.ohos_id_color_palette_aux1'))
Image($r('sys.media.ohos_app_icon'))
.border({color: $r('sys.color.ohos_id_color_palette_aux1'), radius: $r('sys.float.ohos_id_corner_radius_button'), width: 2})
.margin({top: $r('sys.float.ohos_id_elements_margin_horizontal_m'), bottom: $r('sys.float.ohos_id_elements_margin_horizontal_l')})
.height(200)
.width(300)
```
# Preparations
# Before You Start
This document is intended for developers who are new to OpenHarmony application development. It introduces the OpenHarmony project directory structure and the application development process by walking you through a stripped-down, real-world example: building two pages and implementing redirection between them. The following figure shows how the pages look on the DevEco Studio Previewer.
......@@ -16,11 +16,11 @@ Before you begin, there are two basic concepts that will help you better understand OpenHarmony application development: the UI framework and abilities.
OpenHarmony provides a UI development framework, known as ArkUI. ArkUI provides a full range of capabilities you may need for application UI development, ranging from components to layout calculation, animation, UI interaction, and drawing capabilities.
ArkUI comes with two development paradigms: JavaScript-based web-like development paradigm (web-like development paradigm for short) and TypeScript-based declarative development paradigm (declarative development paradigm for short). You can choose whichever development paradigm that aligns with your practice.
ArkUI comes with two development paradigms: ArkTS-based declarative development paradigm (declarative development paradigm for short) and JavaScript-compatible web-like development paradigm (web-like development paradigm for short). You can choose whichever development paradigm that aligns with your practice.
| **Development Paradigm**| **Programming Language**| **UI Update Mode**| **Applicable To**| **Intended Audience**|
| -------- | -------- | -------- | -------- | -------- |
| Declarative development paradigm| Extended TypeScript (eTS)| Data-driven| Applications involving technological sophistication and teamwork| Mobile application and system application developers|
| Declarative development paradigm| ArkTS| Data-driven| Applications involving technological sophistication and teamwork| Mobile application and system application developers|
| Web-like development paradigm| JavaScript| Data-driven| Applications and service widgets with simple UIs| Frontend web developers|
For more details, see [UI Development](../ui/arkui-overview.md).
......@@ -36,7 +36,7 @@ The ability framework model has two forms:
- **Stage model**: introduced since API version 9. For details, see [Stage Model Overview](../ability/stage-brief.md).
The project directory structure of the FA model is different from that of the stage model. The stage model only works with the eTS programming language.
The project directory structure of the FA model is different from that of the stage model. The stage model only works with the ArkTS programming language.
For details about the differences between the FA model and stage model, see [Ability Framework Overview](../ability/ability-brief.md).
......@@ -45,8 +45,8 @@ This document provides an ability with two pages. For more information about abilities, see [Ability Framework Overview](../ability/ability-brief.md).
## Tool Preparation
1. Download the latest version of [DevEco Studio](https://developer.harmonyos.com/cn/develop/deveco-studio#download).
1. Download the latest version of [DevEco Studio](https://developer.harmonyos.com/cn/develop/deveco-studio).
2. Install DevEco Studio and configure the development environment. For details, see [Setting Up the Development Environment](https://developer.harmonyos.com/en/docs/documentation/doc-guides/ohos-setting-up-environment-0000001263160443).
When you are done, follow the instructions in [Getting Started with eTS in Stage Model](start-with-ets-stage.md), [Getting Started with eTS in FA Model](start-with-ets-fa.md), and [Getting Started with JavaScript in FA Model](start-with-js-fa.md).
When you are done, follow the instructions in [Getting Started with ArkTS in Stage Model](start-with-ets-stage.md), [Getting Started with ArkTS in FA Model](start-with-ets-fa.md), and [Getting Started with JavaScript in FA Model](start-with-js-fa.md).
......@@ -28,7 +28,7 @@ Triggered when the Work Scheduler task starts.
| Name | Type | Mandatory | Description |
| ---- | ---------------------------------------- | ---- | -------------- |
| work | [workScheduler.WorkInfo](js-apis-workScheduler.md#workinfo) | Yes | Target task. |
| work | [workScheduler.WorkInfo](js-apis-resourceschedule-workScheduler.md#workinfo) | Yes | Target task.|
**Example**
......@@ -52,7 +52,7 @@ Triggered when the Work Scheduler task stops.
| Name | Type | Mandatory | Description |
| ---- | ---------------------------------------- | ---- | -------------- |
| work | [workScheduler.WorkInfo](js-apis-workScheduler.md#workinfo) | Yes | Target task. |
| work | [workScheduler.WorkInfo](js-apis-resourceschedule-workScheduler.md#workinfo) | Yes | Target task.|
**Example**
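A minimal sketch of such callbacks, assuming the API version 9 module names referenced in this change (**@ohos.WorkSchedulerExtensionAbility** and **@ohos.resourceschedule.workScheduler**); the class name **MyWorkSchedulerAbility** is illustrative:
```ts
import WorkSchedulerExtensionAbility from '@ohos.WorkSchedulerExtensionAbility';
import workScheduler from '@ohos.resourceschedule.workScheduler';

export default class MyWorkSchedulerAbility extends WorkSchedulerExtensionAbility {
  // Triggered when the Work Scheduler task starts.
  onWorkStart(work: workScheduler.WorkInfo) {
    console.info(`onWorkStart, workId: ${work.workId}`);
  }

  // Triggered when the Work Scheduler task stops.
  onWorkStop(work: workScheduler.WorkInfo) {
    console.info(`onWorkStop, workId: ${work.workId}`);
  }
}
```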
......
......@@ -6,7 +6,9 @@ The **RemoteAbilityInfo** module provides information about a remote ability.
>
> The initial APIs of this module are supported since API version 8. Newly added APIs will be marked with a superscript to indicate their earliest API version.
## RemoteAbilityInfo
## RemoteAbilityInfo<sup>(deprecated)</sup>
> This API is deprecated since API version 9. You are advised to use [RemoteAbilityInfo](js-apis-bundleManager-remoteAbilityInfo.md) instead.
**System capability**: SystemCapability.BundleManager.DistributedBundleFramework
......