Commit c9d1cd1b authored by Y yudechen

Merge https://gitee.com/openharmony/docs into new

Change-Id: I400fd2c2841c52400b9806ccd75596be84280ebf

Too many changes to display.

To preserve performance, only 1,000 of the 1,000+ changed files are displayed.
@@ -155,9 +155,9 @@ zh-cn/application-dev/work-scheduler/ @HelloCrease
zh-cn/application-dev/internationalization/ @HelloCrease
zh-cn/application-dev/device/usb-overview.md @ge-yafang
zh-cn/application-dev/device/usb-guidelines.md @ge-yafang
-zh-cn/application-dev/device/device-location-overview.md @zengyawen
+zh-cn/application-dev/device/device-location-overview.md @RayShih
-zh-cn/application-dev/device/device-location-info.md @zengyawen
+zh-cn/application-dev/device/device-location-info.md @RayShih
-zh-cn/application-dev/device/device-location-geocoding.md @zengyawen
+zh-cn/application-dev/device/device-location-geocoding.md @RayShih
zh-cn/application-dev/device/sensor-overview.md @HelloCrease
zh-cn/application-dev/device/sensor-guidelines.md @HelloCrease
zh-cn/application-dev/device/vibrator-overview.md @HelloCrease
@@ -181,7 +181,7 @@ zh-cn/application-dev/napi/drawing-guidelines.md @ge-yafang
zh-cn/application-dev/napi/rawfile-guidelines.md @HelloCrease
zh-cn/application-dev/reference/js-service-widget-ui/ @HelloCrease
zh-cn/application-dev/faqs/ @zengyawen
-zh-cn/application-dev/file-management/ @qinxiaowang
+zh-cn/application-dev/file-management/ @zengyawen
zh-cn/application-dev/application-test/ @HelloCrease
zh-cn/application-dev/device-usage-statistics/ @HelloCrease
@@ -212,7 +212,7 @@ zh-cn/application-dev/reference/apis/js-apis-audio.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-camera.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-image.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-media.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-medialibrary.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-medialibrary.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-i18n.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-intl.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-resource-manager.md @HelloCrease
@@ -239,13 +239,13 @@ zh-cn/application-dev/reference/apis/js-apis-system-storage.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-data-rdb.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-settings.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-data-resultset.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-document.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-document.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-environment.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-environment.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-fileio.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-fileio.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-filemanager.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-filemanager.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-statfs.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-statfs.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-storage-statistics.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-storage-statistics.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-volumemanager.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-volumemanager.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-contact.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-call.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-observer.md @zengyawen
@@ -302,38 +302,38 @@ zh-cn/application-dev/reference/apis/js-apis-vibrator.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-appAccount.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-distributed-account.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-osAccount.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-convertxml.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-convertxml.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-process.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-process.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-uri.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-uri.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-url.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-url.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-util.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-util.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-arraylist.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-arraylist.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-deque.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-deque.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-hashmap.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-hashmap.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-hashset.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-hashset.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-lightweightmap.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-lightweightmap.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-lightweightset.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-lightweightset.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-linkedlist.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-linkedlist.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-list.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-list.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-plainarray.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-plainarray.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-queue.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-queue.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-stack.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-stack.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-treemap.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-treemap.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-treeset.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-treeset.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-vector.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-vector.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-worker.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-worker.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-xml.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-xml.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-testRunner.md @HelloCrease
+zh-cn/application-dev/reference/apis/js-apis-testRunner.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-resourceschedule-backgroundTaskManager.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-uitest.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-hisysevent.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-privacyManager.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-EnterpriseAdminExtensionAbility.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-animator.md @HelloCrease @qieqiewl @tomatodevboy @niulihua
zh-cn/application-dev/reference/apis/js-apis-uiappearance.md @HelloCrease @qieqiewl @tomatodevboy @niulihua
zh-cn/application-dev/reference/apis/js-apis-useriam-faceauth.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-userfilemanager.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-userfilemanager.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-cryptoFramework.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-buffer.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-buffer.md @ge-yafang
zh-cn/application-dev/reference/apis/development-intro.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-accessibility-extension-context.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-application-applicationContext.md @RayShih
@@ -366,7 +366,7 @@ zh-cn/application-dev/reference/apis/js-apis-mouseevent.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-nfcController.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-nfctech.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-pointer.md @HelloCrease
-zh-cn/application-dev/reference/apis/js-apis-securityLabel.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-securityLabel.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-app.md @RayShih @shuaytao @wangzhen107 @inter515
zh-cn/application-dev/reference/apis/js-apis-system-battery.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-bluetooth.md @RayShih
@@ -374,7 +374,7 @@ zh-cn/application-dev/reference/apis/js-apis-system-brightness.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-configuration.md @HelloCrease
zh-cn/application-dev/reference/apis/js-apis-system-device.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-fetch.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-system-file.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-system-file.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-system-location.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-system-mediaquery.md @HelloCrease @qieqiewl @tomatodevboy @niulihua
zh-cn/application-dev/reference/apis/js-apis-system-network.md @zengyawen
@@ -390,8 +390,8 @@ zh-cn/application-dev/reference/apis/js-apis-accessibility-config.md @RayShih
zh-cn/application-dev/reference/apis/js-apis-Bundle-BundleStatusCallback.md @RayShih @shuaytao @wangzhen107 @inter515
zh-cn/application-dev/reference/apis/js-apis-bundle-PackInfo.md @RayShih @shuaytao @wangzhen107 @inter515
zh-cn/application-dev/reference/apis/js-apis-enterpriseDeviceManager-DeviceSettingsManager.md @HelloCrease
-zh-cn/application-dev/reference/apis/js-apis-fileAccess.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-fileAccess.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-fileExtensionInfo.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-fileExtensionInfo.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-net-ethernet.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-net-policy.md @zengyawen
zh-cn/application-dev/reference/apis/js-apis-net-sharing.md @zengyawen
@@ -450,16 +450,6 @@ zh-cn/application-dev/reference/apis/js-apis-formprovider.md @RayShih @littlejer
zh-cn/application-dev/reference/apis/js-apis-inputmethod-extension-ability.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-inputmethod-extension-context.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-inputmethod-subtype.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errcode-inputmethod-framework.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errcode-usb.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-datashare.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-colorspace-manager.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-display.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-distributed-data_object.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-distributedKVStore.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-pasteboard.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-preferences.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-window.md @ge-yafang
zh-cn/application-dev/reference/apis/js-apis-application-quickFixManager.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-missionManager.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-particleAbility.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
@@ -470,3 +460,62 @@ zh-cn/application-dev/reference/apis/js-apis-service-extension-ability.md @RaySh
zh-cn/application-dev/reference/apis/js-apis-service-extension-context.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/apis/js-apis-wantAgent.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
zh-cn/application-dev/reference/errorcodes/errcode-ability.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-access-token.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-accessibility.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-account.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-animator.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-app-account.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-audio.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-avsession.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-backgroundTaskMgr.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-batteryStatistics.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-brightness.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-buffer.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-bundle.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-colorspace-manager.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-CommonEventService.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-containers.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-data-rdb.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-datashare.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-device-manager.md @qinxiaowang
zh-cn/application-dev/reference/errorcodes/errcode-DeviceUsageStatistics.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-display.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-distributed-dataObject.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-distributedKVStore.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-DistributedNotificationService.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-DistributedSchedule.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-enterpriseDeviceManager.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-faultlogger.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-filemanagement.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-geoLocationManager.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-hiappevent.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-hisysevent.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-hiviewdfx-hidebug.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-huks.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-i18n.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-inputmethod-framework.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-multimodalinput.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-nfc.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-pasteboard.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-power.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-preferences.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-promptAction.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-reminderAgentManager.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-request.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-resource-manager.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-router.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-rpc.md @qinxiaowang
zh-cn/application-dev/reference/errorcodes/errcode-runninglock.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-sensor.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-system-parameterV9.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-thermal.md @zengyawen
zh-cn/application-dev/reference/errorcodes/errcode-uitest.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-universal.md @RayShih
zh-cn/application-dev/reference/errorcodes/errcode-update.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-usb.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-vibrator.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-webview.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-window.md @ge-yafang
zh-cn/application-dev/reference/errorcodes/errcode-workScheduler.md @HelloCrease
zh-cn/application-dev/reference/errorcodes/errcode-zlib.md @RayShih
# Legal Notices
**Copyright (c) 2020-2022 OpenAtom OpenHarmony. All rights reserved.**
## Copyright
All copyrights of the OpenHarmony documents are reserved by OpenAtom OpenHarmony.
The OpenHarmony documents are licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). For easier understanding, you can visit [Creative Commons](https://creativecommons.org/licenses/by/4.0/) to get a human-readable summary of the license. For the complete content, see [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode).
## Trademarks and Permissions
No content provided in the OpenHarmony documentation shall be deemed as a grant of the approval or right to use any trademark, name, or logo of the OpenAtom Foundation and OpenAtom OpenHarmony. No third parties shall use any of the aforementioned trademarks, names, or logos in any way without explicit prior written permission of the OpenAtom Foundation.
## Disclaimer
The information in the OpenHarmony documents is subject to change without notice.
The OpenHarmony documents are provided without any express or implied warranty. In any case, the OpenAtom Foundation or the copyright owner is not liable for any direct or indirect loss arising from the use of the OpenHarmony documents, regardless of the cause or legal theory, even if the OpenHarmony documents have stated that there is a possibility of such loss.
<!--no_check-->
\ No newline at end of file
@@ -150,7 +150,7 @@ Currently, the OpenHarmony community supports 17 types of development boards, wh
| System Type| Board Model| Chip Model| <div style="width:200pt">Function Description and Use Case</div> | Application Scenario| Code Repository |
| -------- | -------- | -------- | -------- | -------- | -------- |
-| Standard system| Runhe HH-SCDAYU200| RK3568 | <div style="width:200pt">Function description:<br>Bolstered by the Rockchip RK3568, the HH-SCDAYU200 development board integrates the dual-core GPU and efficient NPU. Its quad-core 64-bit Cortex-A55 processor uses the advanced 22 nm fabrication process and is clocked at up to 2.0 GHz. The board is packed with Bluetooth, Wi-Fi, audio, video, and camera features, with a wide range of expansion ports to accommodate various video input and outputs. It comes with dual GE auto-sensing RJ45 ports, so it can be used in multi-connectivity products, such as network video recorders (NVRs) and industrial gateways.<br>Use case:<br>[DAYU200 Use Case](device-dev/porting/porting-dayu200-on_standard-demo.md)</div> | Entertainment, easy travel, and smart home, such as kitchen hoods, ovens, and treadmills.| [device_soc_rockchip](https://gitee.com/openharmony/device_soc_rockchip)<br>[device_board_hihope](https://gitee.com/openharmony/device_board_hihope)<br>[vendor_hihope](https://gitee.com/openharmony/vendor_hihope) <br> |
+| Standard system| Runhe HH-SCDAYU200| RK3568 | <div style="width:200pt">Function description:<br>Bolstered by the Rockchip RK3568, the HH-SCDAYU200 development board integrates the dual-core GPU and efficient NPU. Its quad-core 64-bit Cortex-A55 processor uses the advanced 22 nm fabrication process and is clocked at up to 2.0 GHz. The board is packed with Bluetooth, Wi-Fi, audio, video, and camera features, with a wide range of expansion ports to accommodate various video input and outputs. It comes with dual GE auto-sensing RJ45 ports, so it can be used in multi-connectivity products, such as network video recorders (NVRs) and industrial gateways.</div> | Entertainment, easy travel, and smart home, such as kitchen hoods, ovens, and treadmills.| [device_soc_rockchip](https://gitee.com/openharmony/device_soc_rockchip)<br>[device_board_hihope](https://gitee.com/openharmony/device_board_hihope)<br>[vendor_hihope](https://gitee.com/openharmony/vendor_hihope) <br> |
| Small system| Hispark_Taurus | Hi3516DV300 | <div style="width:200pt">Function Description:<br>Hi3516D V300 is the next-generation system on chip (SoC) for smart HD IP cameras. It integrates the next-generation image signal processor (ISP), H.265 video compression encoder, and high-performance NNIE engine, and delivers high performance in terms of low bit rate, high image quality, intelligent processing and analysis, and low power consumption.</div> | Smart device with screens, such as refrigerators with screens and head units.| [device_soc_hisilicon](https://gitee.com/openharmony/device_soc_hisilicon)<br>[device_board_hisilicon](https://gitee.com/openharmony/device_board_hisilicon)<br>[vendor_hisilicon](https://gitee.com/openharmony/vendor_hisilicon) <br> |
| Mini system| Multi-modal V200Z-R | BES2600 | <div style="width:200pt">Function description:<br>The multi-modal V200Z-R development board is a high-performance, multi-functional, and cost-effective AIoT SoC powered by the BES2600WM chip of Bestechnic. It integrates a quad-core ARM processor with a frequency of up to 1 GHz as well as dual-mode Wi-Fi and dual-mode Bluetooth. The board supports the 802.11 a/b/g/n/ and BT/BLE 5.2 standards. It is able to accommodate RAM of up to 42 MB and flash memory of up to 32 MB, and supports the MIPI display serial interface (DSI) and camera serial interface (CSI). It is applicable to various AIoT multi-modal VUI and GUI interaction scenarios.<br>Use case:<br>[Multi-modal V200Z-R Use Case](device-dev/porting/porting-bes2600w-on-minisystem-display-demo.md)</div> | Smart hardware, and smart devices with screens, such as speakers and watches.| [device_soc_bestechnic](https://gitee.com/openharmony/device_soc_bestechnic)<br>[device_board_fnlink](https://gitee.com/openharmony/device_board_fnlink)<br>[vendor_bestechnic](https://gitee.com/openharmony/vendor_bestechnic) <br> |
......
@@ -7,7 +7,7 @@ To ensure successful communications between the client and server, interfaces re
![IDL-interface-description](./figures/IDL-interface-description.png)
-IDL provides the following functions:
+**IDL provides the following functions:**
- Declares interfaces provided by system services for external systems, and based on the interface declaration, generates C, C++, JS, or TS code for inter-process communication (IPC) or remote procedure call (RPC) proxies and stubs during compilation.
@@ -17,7 +17,7 @@ IDL provides the following functions:
![IPC-RPC-communication-model](./figures/IPC-RPC-communication-model.png)
-IDL has the following advantages:
+**IDL has the following advantages:**
- Services are defined in the form of interfaces in IDL. Therefore, you do not need to focus on implementation details.
@@ -433,7 +433,7 @@ export default {
        console.log('ServiceAbility want:' + JSON.stringify(want));
        console.log('ServiceAbility want name:' + want.bundleName)
    } catch(err) {
-        console.log("ServiceAbility error:" + err)
+        console.log('ServiceAbility error:' + err)
    }
    console.info('ServiceAbility onConnect end');
    return new IdlTestImp('connect');
@@ -455,13 +455,13 @@ import featureAbility from '@ohos.ability.featureAbility';
function callbackTestIntTransaction(result: number, ret: number): void {
    if (result == 0 && ret == 124) {
-        console.log("case 1 success ");
+        console.log('case 1 success');
    }
}

function callbackTestStringTransaction(result: number): void {
    if (result == 0) {
-        console.log("case 2 success ");
+        console.log('case 2 success');
    }
}
@@ -472,17 +472,17 @@ var onAbilityConnectDone = {
        testProxy.testStringTransaction('hello', callbackTestStringTransaction);
    },
    onDisconnect:function (elementName) {
-        console.log("onDisconnectService onDisconnect");
+        console.log('onDisconnectService onDisconnect');
    },
    onFailed:function (code) {
-        console.log("onDisconnectService onFailed");
+        console.log('onDisconnectService onFailed');
    }
};

function connectAbility: void {
    let want = {
-        "bundleName":"com.example.myapplicationidl",
-        "abilityName": "com.example.myapplicationidl.ServiceAbility"
+        bundleName: 'com.example.myapplicationidl',
+        abilityName: 'com.example.myapplicationidl.ServiceAbility'
    };
    let connectionId = -1;
    connectionId = featureAbility.connectAbility(want, onAbilityConnectDone);
@@ -495,7 +495,7 @@ function connectAbility: void {
You can send a class from one process to another through IPC interfaces. However, you must ensure that the peer can use the code of this class and this class supports the **marshalling** and **unmarshalling** methods. OpenHarmony uses **marshalling** and **unmarshalling** to serialize and deserialize objects into objects that can be identified by each process.
-To create a class that supports the sequenceable type, perform the following operations:
+**To create a class that supports the sequenceable type, perform the following operations:**
1. Implement the **marshalling** method, which obtains the current state of the object and serializes the object into a **Parcel** object.
2. Implement the **unmarshalling** method, which deserializes the object from a **Parcel** object.
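As a rough illustration of these two steps, the minimal sketch below serializes two fields into a Parcel and restores them. The class name and fields are assumptions for this example; the **rpc.MessageParcel** type and its read/write calls follow the same `@ohos.rpc` usage shown elsewhere in this guide.

```ts
// Illustrative sketch of a sequenceable class; names and field types are assumptions.
import rpc from '@ohos.rpc';

export default class MySequenceable {
    num: number = 0;
    str: string = '';

    constructor(num: number, str: string) {
        this.num = num;
        this.str = str;
    }

    // Serialize the current state of the object into a Parcel object.
    marshalling(messageParcel: rpc.MessageParcel): boolean {
        messageParcel.writeInt(this.num);
        messageParcel.writeString(this.str);
        return true;
    }

    // Restore the object state from a Parcel object.
    unmarshalling(messageParcel: rpc.MessageParcel): boolean {
        this.num = messageParcel.readInt();
        this.str = messageParcel.readString();
        return true;
    }
}
```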
@@ -595,7 +595,7 @@ export default class IdlTestServiceProxy implements IIdlTestService {
        let _reply = new rpc.MessageParcel();
        _data.writeInt(data);
        this.proxy.sendRequest(IdlTestServiceProxy.COMMAND_TEST_INT_TRANSACTION, _data, _reply, _option).then(function(result) {
-            if (result.errCode === 0) {
+            if (result.errCode == 0) {
                let _errCode = result.reply.readInt();
                if (_errCode != 0) {
                    let _returnValue = undefined;
@@ -605,7 +605,7 @@ export default class IdlTestServiceProxy implements IIdlTestService {
                let _returnValue = result.reply.readInt();
                callback(_errCode, _returnValue);
            } else {
-                console.log("sendRequest failed, errCode: " + result.errCode);
+                console.log('sendRequest failed, errCode: ' + result.errCode);
            }
        })
    }
@@ -617,11 +617,11 @@ export default class IdlTestServiceProxy implements IIdlTestService {
        let _reply = new rpc.MessageParcel();
        _data.writeString(data);
        this.proxy.sendRequest(IdlTestServiceProxy.COMMAND_TEST_STRING_TRANSACTION, _data, _reply, _option).then(function(result) {
-            if (result.errCode === 0) {
+            if (result.errCode == 0) {
                let _errCode = result.reply.readInt();
                callback(_errCode);
            } else {
-                console.log("sendRequest failed, errCode: " + result.errCode);
+                console.log('sendRequest failed, errCode: ' + result.errCode);
            }
        })
    }
@@ -644,12 +644,12 @@ import nativeMgr from 'nativeManager';
function testIntTransactionCallback(errCode: number, returnValue: number)
{
-    console.log("errCode: " + errCode + " returnValue: " + returnValue);
+    console.log('errCode: ' + errCode + ' returnValue: ' + returnValue);
}

function testStringTransactionCallback(errCode: number)
{
-    console.log("errCode: " + errCode);
+    console.log('errCode: ' + errCode);
}

function jsProxyTriggerCppStub()
@@ -660,6 +660,6 @@ function jsProxyTriggerCppStub()
    tsProxy.testIntTransaction(10, testIntTransactionCallback);
    // Call testStringTransaction.
-    tsProxy.testStringTransaction("test", testIntTransactionCallback);
+    tsProxy.testStringTransaction('test', testIntTransactionCallback);
}
```
@@ -8,14 +8,13 @@
- Quick Start
  - Getting Started
    - [Preparations](quick-start/start-overview.md)
-    - [Getting Started with eTS in Stage Model](quick-start/start-with-ets-stage.md)
+    - [Getting Started with ArkTS in Stage Model](quick-start/start-with-ets-stage.md)
-    - [Getting Started with eTS in FA Model](quick-start/start-with-ets-fa.md)
+    - [Getting Started with ArkTS in FA Model](quick-start/start-with-ets-fa.md)
    - [Getting Started with JavaScript in FA Model](quick-start/start-with-js-fa.md)
  - Development Fundamentals
    - [Application Package Structure Configuration File (FA Model)](quick-start/package-structure.md)
    - [Application Package Structure Configuration File (Stage Model)](quick-start/stage-structure.md)
    - [SysCap](quick-start/syscap.md)
-    - [HarmonyAppProvision Configuration File](quick-start/app-provision-structure.md)
- Development
  - [Ability Development](ability/Readme-EN.md)
......
# Ability Development

- [Ability Framework Overview](ability-brief.md)
- [Context Usage](context-userguide.md)
- FA Model
@@ -19,5 +20,3 @@
- [Ability Assistant Usage](ability-assistant-guidelines.md)
- [ContinuationManager Development](continuationmanager.md)
- [Test Framework Usage](ability-delegator.md)
@@ -96,13 +96,13 @@ Obtain the context by calling **context.getApplicationContext()** in **Ability**
**Example**

```javascript
-import AbilityStage from "@ohos.application.AbilityStage";
+import Ability from "@ohos.application.Ability";
var lifecycleid;

-export default class MyAbilityStage extends AbilityStage {
+export default class MainAbility extends Ability {
    onCreate() {
-        console.log("MyAbilityStage onCreate")
+        console.log("MainAbility onCreate")
        let AbilityLifecycleCallback = {
            onAbilityCreate(ability){
                console.log("AbilityLifecycleCallback onAbilityCreate ability:" + JSON.stringify(ability));
@@ -141,11 +141,11 @@ export default class MyAbilityStage extends AbilityStage {
        // 2. Use applicationContext to register and listen for the ability lifecycle in the application.
        lifecycleid = applicationContext.registerAbilityLifecycleCallback(AbilityLifecycleCallback);
        console.log("registerAbilityLifecycleCallback number: " + JSON.stringify(lifecycleid));
-    }
+    },
    onDestroy() {
        let applicationContext = this.context.getApplicationContext();
        applicationContext.unregisterAbilityLifecycleCallback(lifecycleid, (error, data) => {
            console.log("unregisterAbilityLifecycleCallback success, err: " + JSON.stringify(error));
        });
    }
}
@@ -211,7 +211,13 @@ export default class MainAbility extends Ability {
        let context = this.context;
        console.log("[Demo] MainAbility bundleName " + context.abilityInfo.bundleName)
-        windowStage.setUIContent(this.context, "pages/index", null)
+        windowStage.loadContent("pages/index", (err, data) => {
+            if (err.code) {
+                console.error('Failed to load the content. Cause:' + JSON.stringify(err));
+                return;
+            }
+            console.info('Succeeded in loading the content. Data: ' + JSON.stringify(data))
+        });
    }

    onWindowStageDestroy() {
@@ -237,7 +243,7 @@ For details, see [FormExtensionContext](../reference/apis/js-apis-formextensionc
### Obtaining the Context on an ArkTS Page
-In the stage model, in the `onWindowStageCreate` lifecycle of an ability, you can call `SetUIContent` of `WindowStage` to load an ArkTS page. In some scenarios, you need to obtain the context on the page to call related APIs.
+In the stage model, in the onWindowStageCreate lifecycle of an ability, you can call **SetUIContent** of **WindowStage** to load an ArkTS page. In some scenarios, you need to obtain the context on the page to call related APIs.
**How to Obtain**
@@ -245,7 +251,7 @@ Use the API described in the table below to obtain the context associated with a
| API | Description |
| :------------------------------------ | :--------------------------- |
-| getContext(component: Object): Object | Obtains the `Context` object associated with a component on the page.|
+| getContext(component: Object): Object | Obtains the **Context** object associated with a component on the page.|

**Example**
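A minimal sketch of this call (the page structure, component names, and log text below are assumptions for illustration, not the guide's own elided sample):

```ts
// Illustrative sketch: a component on an ArkTS page retrieves its associated Context.
@Entry
@Component
struct Index {
  build() {
    Column() {
      Button('Get context')
        .onClick(() => {
          // getContext(this) returns the Context object associated with this component.
          let context = getContext(this);
          console.info('Context obtained: ' + (context !== undefined));
        })
    }
  }
}
```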
......
@@ -14,20 +14,20 @@ As the entry of the ability continuation capability, **continuationManager** is
## Available APIs

| API | Description|
| -------- | ----------- |
-| register(callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
+| registerContinuation(callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
-| register(options: ContinuationExtraParams, callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API uses an asynchronous callback to return the result.|
+| registerContinuation(options: ContinuationExtraParams, callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API uses an asynchronous callback to return the result.|
-| register(options?: ContinuationExtraParams): Promise\<number> | Registers the continuation management service and obtains a token. This API uses a promise to return the result.|
+| registerContinuation(options?: ContinuationExtraParams): Promise\<number> | Registers the continuation management service and obtains a token. This API uses a promise to return the result.|
-| on(type: "deviceConnect", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device connection events. This API uses an asynchronous callback to return the result.|
+| on(type: "deviceSelected", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device connection events. This API uses an asynchronous callback to return the result.|
-| on(type: "deviceDisconnect", token: number, callback: Callback\<Array\<string>>): void | Subscribes to device disconnection events. This API uses an asynchronous callback to return the result.|
+| on(type: "deviceUnselected", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device disconnection events. This API uses an asynchronous callback to return the result.|
-| off(type: "deviceConnect", token: number): void | Unsubscribes from device connection events.|
+| off(type: "deviceSelected", token: number): void | Unsubscribes from device connection events.|
-| off(type: "deviceDisconnect", token: number): void | Unsubscribes from device disconnection events.|
+| off(type: "deviceUnselected", token: number): void | Unsubscribes from device disconnection events.|
-| startDeviceManager(token: number, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
+| startContinuationDeviceManager(token: number, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
-| startDeviceManager(token: number, options: ContinuationExtraParams, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API uses an asynchronous callback to return the result.|
+| startContinuationDeviceManager(token: number, options: ContinuationExtraParams, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API uses an asynchronous callback to return the result.|
-| startDeviceManager(token: number, options?: ContinuationExtraParams): Promise\<void> | Starts the device selection module to show the list of available devices. This API uses a promise to return the result.|
+| startContinuationDeviceManager(token: number, options?: ContinuationExtraParams): Promise\<void> | Starts the device selection module to show the list of available devices. This API uses a promise to return the result.|
-| updateConnectStatus(token: number, deviceId: string, status: DeviceConnectState, callback: AsyncCallback\<void>): void | Instructs the device selection module to update the device connection state. This API uses an asynchronous callback to return the result.|
+| updateContinuationState(token: number, deviceId: string, status: DeviceConnectState, callback: AsyncCallback\<void>): void | Instructs the device selection module to update the device connection state. This API uses an asynchronous callback to return the result.|
-| updateConnectStatus(token: number, deviceId: string, status: DeviceConnectState): Promise\<void> | Instructs the device selection module to update the device connection state. This API uses a promise to return the result.|
+| updateContinuationState(token: number, deviceId: string, status: DeviceConnectState): Promise\<void> | Instructs the device selection module to update the device connection state. This API uses a promise to return the result.|
-| unregister(token: number, callback: AsyncCallback\<void>): void | Deregisters the continuation management service. This API uses an asynchronous callback to return the result.|
+| unregisterContinuation(token: number, callback: AsyncCallback\<void>): void | Deregisters the continuation management service. This API uses an asynchronous callback to return the result.|
-| unregister(token: number): Promise\<void> | Deregisters the continuation management service. This API uses a promise to return the result.|
+| unregisterContinuation(token: number): Promise\<void> | Deregisters the continuation management service. This API uses a promise to return the result.|

## How to Develop

1. Import the **continuationManager** module.
@@ -36,7 +36,7 @@ As the entry of the ability continuation capability, **continuationManager** is
   import continuationManager from '@ohos.continuation.continuationManager';
   ```
-2. Apply for permissions required for cross-device continuation or collaboration operations.
+2. Apply for the **DISTRIBUTED_DATASYNC** permission.
   The permission application operation varies according to the ability model in use. In the FA mode, add the required permission in the `config.json` file, as follows:
@@ -57,6 +57,7 @@ As the entry of the ability continuation capability, **continuationManager** is
   ```ts
   import abilityAccessCtrl from "@ohos.abilityAccessCtrl";
   import bundle from '@ohos.bundle';
+   import featureAbility from '@ohos.ability.featureAbility';

   async function requestPermission() {
       let permissions: Array<string> = [
@@ -124,7 +125,8 @@ As the entry of the ability continuation capability, **continuationManager** is
       // If the permission is not granted, call requestPermissionsFromUser to apply for the permission.
       if (needGrantPermission) {
           try {
-              await globalThis.abilityContext.requestPermissionsFromUser(permissions);
+              // globalThis.context is Ability.context, which must be assigned a value in the MainAbility.ts file in advance.
+              await globalThis.context.requestPermissionsFromUser(permissions);
           } catch (err) {
               console.error('app permission request permissions error' + JSON.stringify(err));
           }
       }
@@ -140,13 +142,16 @@ As the entry of the ability continuation capability, **continuationManager** is
   ```ts
   let token: number = -1; // Used to save the token returned after the registration. The token will be used when listening for device connection/disconnection events, starting the device selection module, and updating the device connection state.

-   continuationManager.register().then((data) => {
-       console.info('register finished, ' + JSON.stringify(data));
-       token = data; // Obtain a token and assign a value to the token variable.
-   }).catch((err) => {
-       console.error('register failed, cause: ' + JSON.stringify(err));
-   });
+   try {
+       continuationManager.registerContinuation().then((data) => {
+           console.info('registerContinuation finished, ' + JSON.stringify(data));
+           token = data; // Obtain a token and assign a value to the token variable.
+       }).catch((err) => {
+           console.error('registerContinuation failed, cause: ' + JSON.stringify(err));
+       });
+   } catch (err) {
+       console.error('registerContinuation failed, cause: ' + JSON.stringify(err));
+   }
   ```
4. Listen for the device connection/disconnection state.
@@ -156,28 +161,31 @@ As the entry of the ability continuation capability, **continuationManager** is
   ```ts
   let remoteDeviceId: string = ""; // Used to save the information about the remote device selected by the user, which will be used for cross-device continuation or collaboration.

-   // The token parameter is the token obtained during the registration.
-   continuationManager.on("deviceConnect", token, (continuationResults) => {
-       console.info('registerDeviceConnectCallback len: ' + continuationResults.length);
-       if (continuationResults.length <= 0) {
-           console.info('no selected device');
-           return;
-       }
-       remoteDeviceId = continuationResults[0].id; // Assign the deviceId of the first selected remote device to the remoteDeviceId variable.
-
-       // Pass the remoteDeviceId parameter to want.
-       let want = {
-           deviceId: remoteDeviceId,
-           bundleName: 'ohos.samples.continuationmanager',
-           abilityName: 'MainAbility'
-       };
-       // To initiate multi-device collaboration, you must obtain the ohos.permission.DISTRIBUTED_DATASYNC permission.
-       globalThis.abilityContext.startAbility(want).then((data) => {
-           console.info('StartRemoteAbility finished, ' + JSON.stringify(data));
-       }).catch((err) => {
-           console.error('StartRemoteAbility failed, cause: ' + JSON.stringify(err));
-       });
-   });
+   try {
+       // The token parameter is the token obtained during the registration.
+       continuationManager.on("deviceSelected", token, (continuationResults) => {
+           console.info('registerDeviceSelectedCallback len: ' + continuationResults.length);
+           if (continuationResults.length <= 0) {
+               console.info('no selected device');
+               return;
+           }
+           remoteDeviceId = continuationResults[0].id; // Assign the deviceId of the first selected remote device to the remoteDeviceId variable.
+
+           // Pass the remoteDeviceId parameter to want.
+           let want = {
+               deviceId: remoteDeviceId,
+               bundleName: 'ohos.samples.continuationmanager',
+               abilityName: 'MainAbility'
+           };
+           globalThis.abilityContext.startAbility(want).then((data) => {
+               console.info('StartRemoteAbility finished, ' + JSON.stringify(data));
+           }).catch((err) => {
+               console.error('StartRemoteAbility failed, cause: ' + JSON.stringify(err));
+           });
+       });
+   } catch (err) {
+       console.error('on failed, cause: ' + JSON.stringify(err));
+   }
   ```
The preceding multi-device collaboration operation is performed across devices in the stage model. For details about this operation in the FA model, see [Page Ability Development](https://gitee.com/openharmony/docs/blob/master/en/application-dev/ability/fa-pageability.md). The preceding multi-device collaboration operation is performed across devices in the stage model. For details about this operation in the FA model, see [Page Ability Development](https://gitee.com/openharmony/docs/blob/master/en/application-dev/ability/fa-pageability.md).
...@@ -189,35 +197,43 @@ As the entry of the ability continuation capability, **continuationManager** is ...@@ -189,35 +197,43 @@ As the entry of the ability continuation capability, **continuationManager** is
let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.CONNECTED; let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.CONNECTED;
// The token parameter is the token obtained during the registration, and the remoteDeviceId parameter is the remoteDeviceId obtained. // The token parameter is the token obtained during the registration, and the remoteDeviceId parameter is the remoteDeviceId obtained.
continuationManager.updateConnectStatus(token, remoteDeviceId, deviceConnectStatus).then((data) => { try {
console.info('updateConnectStatus finished, ' + JSON.stringify(data)); continuationManager.updateContinuationState(token, remoteDeviceId, deviceConnectStatus).then((data) => {
}).catch((err) => { console.info('updateContinuationState finished, ' + JSON.stringify(data));
console.error('updateConnectStatus failed, cause: ' + JSON.stringify(err)); }).catch((err) => {
}); console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
}
``` ```
Listen for the device disconnection state so that the user can stop cross-device continuation or collaboration in time. The sample code is as follows: Listen for the device disconnection state so that the user can stop cross-device continuation or collaboration in time. The sample code is as follows:
```ts ```ts
// The token parameter is the token obtained during the registration. try {
continuationManager.on("deviceDisconnect", token, (deviceIds) => { // The token parameter is the token obtained during the registration.
console.info('onDeviceDisconnect len: ' + deviceIds.length); continuationManager.on("deviceUnselected", token, (continuationResults) => {
if (deviceIds.length <= 0) { console.info('onDeviceUnselected len: ' + continuationResults.length);
console.info('no unselected device'); if (continuationResults.length <= 0) {
return; console.info('no unselected device');
} return;
}
// Update the device connection state. // Update the device connection state.
let unselectedDeviceId: string = deviceIds[0]; // Assign the deviceId of the first deselected remote device to the unselectedDeviceId variable. let unselectedDeviceId: string = continuationResults[0].id; // Assign the deviceId of the first deselected remote device to the unselectedDeviceId variable.
let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.DISCONNECTING; // Device disconnected. let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.DISCONNECTING; // Device disconnected.
// The token parameter is the token obtained during the registration, and the unselectedDeviceId parameter is the unselectedDeviceId obtained. // The token parameter is the token obtained during the registration, and the unselectedDeviceId parameter is the unselectedDeviceId obtained.
continuationManager.updateConnectStatus(token, unselectedDeviceId, deviceConnectStatus).then((data) => { continuationManager.updateContinuationState(token, unselectedDeviceId, deviceConnectStatus).then((data) => {
console.info('updateConnectStatus finished, ' + JSON.stringify(data)); console.info('updateContinuationState finished, ' + JSON.stringify(data));
}).catch((err) => { }).catch((err) => {
console.error('updateConnectStatus failed, cause: ' + JSON.stringify(err)); console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
});
}); });
}); } catch (err) {
console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
}
``` ```
5. Start the device selection module to show the list of available devices on the network. 5. Start the device selection module to show the list of available devices on the network.
...@@ -231,12 +247,16 @@ As the entry of the ability continuation capability, **continuationManager** is ...@@ -231,12 +247,16 @@ As the entry of the ability continuation capability, **continuationManager** is
continuationMode: continuationManager.ContinuationMode.COLLABORATION_SINGLE // Single-choice mode of the device selection module. continuationMode: continuationManager.ContinuationMode.COLLABORATION_SINGLE // Single-choice mode of the device selection module.
}; };
// The token parameter is the token obtained during the registration. try {
continuationManager.startDeviceManager(token, continuationExtraParams).then((data) => { // The token parameter is the token obtained during the registration.
console.info('startDeviceManager finished, ' + JSON.stringify(data)); continuationManager.startContinuationDeviceManager(token, continuationExtraParams).then((data) => {
}).catch((err) => { console.info('startContinuationDeviceManager finished, ' + JSON.stringify(data));
console.error('startDeviceManager failed, cause: ' + JSON.stringify(err)); }).catch((err) => {
}); console.error('startContinuationDeviceManager failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('startContinuationDeviceManager failed, cause: ' + JSON.stringify(err));
}
``` ```
6. If you do not need to perform cross-device migration or collaboration operations, you can deregister the continuation management service, by passing the token obtained during the registration. 6. If you do not need to perform cross-device migration or collaboration operations, you can deregister the continuation management service, by passing the token obtained during the registration.
...@@ -244,10 +264,14 @@ As the entry of the ability continuation capability, **continuationManager** is ...@@ -244,10 +264,14 @@ As the entry of the ability continuation capability, **continuationManager** is
The sample code is as follows: The sample code is as follows:
```ts ```ts
// The token parameter is the token obtained during the registration. try {
continuationManager.unregister(token).then((data) => { // The token parameter is the token obtained during the registration.
console.info('unregister finished, ' + JSON.stringify(data)); continuationManager.unregisterContinuation(token).then((data) => {
}).catch((err) => { console.info('unregisterContinuation finished, ' + JSON.stringify(data));
console.error('unregister failed, cause: ' + JSON.stringify(err)); }).catch((err) => {
}); console.error('unregisterContinuation failed, cause: ' + JSON.stringify(err));
});
} catch (err) {
console.error('unregisterContinuation failed, cause: ' + JSON.stringify(err));
}
``` ```
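If the "deviceSelected" and "deviceUnselected" callbacks registered earlier are no longer needed, they can also be canceled before deregistering. The following is a minimal sketch (not part of the original sample) that assumes the **token** obtained during the registration:
```ts
try {
    // The token parameter is the token obtained during the registration.
    continuationManager.off("deviceSelected", token);
    continuationManager.off("deviceUnselected", token);
} catch (err) {
    console.error('off failed, cause: ' + JSON.stringify(err));
}
```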
...@@ -32,19 +32,19 @@ Example URIs: ...@@ -32,19 +32,19 @@ Example URIs:
**Table 1** Data ability lifecycle APIs **Table 1** Data ability lifecycle APIs
|API|Description| |API|Description|
|:------|:------| |:------|:------|
|onInitialized?(info: AbilityInfo): void|Called during ability initialization to initialize the relational database (RDB).| |onInitialized(info: AbilityInfo): void|Called during ability initialization to initialize the relational database (RDB).|
|update?(uri: string, valueBucket: rdb.ValuesBucket, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<number>): void|Updates data in the database.| |update(uri: string, valueBucket: rdb.ValuesBucket, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<number>): void|Updates data in the database.|
|query?(uri: string, columns: Array\<string>, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<ResultSet>): void|Queries data in the database.| |query(uri: string, columns: Array\<string>, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<ResultSet>): void|Queries data in the database.|
|delete?(uri: string, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<number>): void|Deletes one or more data records from the database.| |delete(uri: string, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<number>): void|Deletes one or more data records from the database.|
|normalizeUri?(uri: string, callback: AsyncCallback\<string>): void|Normalizes the URI. A normalized URI applies to cross-device use, persistence, backup, and restore. When the context changes, it ensures that the same data item can be referenced.| |normalizeUri(uri: string, callback: AsyncCallback\<string>): void|Normalizes the URI. A normalized URI applies to cross-device use, persistence, backup, and restore. When the context changes, it ensures that the same data item can be referenced.|
|batchInsert?(uri: string, valueBuckets: Array\<rdb.ValuesBucket>, callback: AsyncCallback\<number>): void|Inserts multiple data records into the database.| |batchInsert(uri: string, valueBuckets: Array\<rdb.ValuesBucket>, callback: AsyncCallback\<number>): void|Inserts multiple data records into the database.|
|denormalizeUri?(uri: string, callback: AsyncCallback\<string>): void|Converts a normalized URI generated by **normalizeUri** into a denormalized URI.| |denormalizeUri(uri: string, callback: AsyncCallback\<string>): void|Converts a normalized URI generated by **normalizeUri** into a denormalized URI.|
|insert?(uri: string, valueBucket: rdb.ValuesBucket, callback: AsyncCallback\<number>): void|Inserts a data record into the database.| |insert(uri: string, valueBucket: rdb.ValuesBucket, callback: AsyncCallback\<number>): void|Inserts a data record into the database.|
|openFile?(uri: string, mode: string, callback: AsyncCallback\<number>): void|Opens a file.| |openFile(uri: string, mode: string, callback: AsyncCallback\<number>): void|Opens a file.|
|getFileTypes?(uri: string, mimeTypeFilter: string, callback: AsyncCallback\<Array\<string>>): void|Obtains the MIME type of a file.| |getFileTypes(uri: string, mimeTypeFilter: string, callback: AsyncCallback\<Array\<string>>): void|Obtains the MIME type of a file.|
|getType?(uri: string, callback: AsyncCallback\<string>): void|Obtains the MIME type matching the data specified by the URI.| |getType(uri: string, callback: AsyncCallback\<string>): void|Obtains the MIME type matching the data specified by the URI.|
|executeBatch?(ops: Array\<DataAbilityOperation>, callback: AsyncCallback\<Array\<DataAbilityResult>>): void|Operates data in the database in batches.| |executeBatch(ops: Array\<DataAbilityOperation>, callback: AsyncCallback\<Array\<DataAbilityResult>>): void|Operates data in the database in batches.|
|call?(method: string, arg: string, extras: PacMap, callback: AsyncCallback\<PacMap>): void|Calls a custom API.| |call(method: string, arg: string, extras: PacMap, callback: AsyncCallback\<PacMap>): void|Calls a custom API.|
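To illustrate how these callbacks typically map onto the RDB store, the sketch below (illustrative only, not part of the original guide; the table name and the module-level **rdbStore** variable assigned in **onInitialized()** are assumptions) shows one possible **insert()** and **query()** implementation:
```javascript
import dataAbility from '@ohos.data.dataAbility'

const TABLE_NAME = 'test' // Assumed table name created by the SQL statement in onInitialized().

export default {
    insert(uri, valueBucket, callback) {
        // Insert one record; the generated row ID is returned through the callback.
        rdbStore.insert(TABLE_NAME, valueBucket, callback)
    },
    query(uri, columns, predicates, callback) {
        // Convert the DataAbilityPredicates into RdbPredicates bound to the table, then query.
        let rdbPredicates = dataAbility.createRdbPredicates(TABLE_NAME, predicates)
        rdbStore.query(rdbPredicates, columns, callback)
    }
}
```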
## How to Develop ## How to Develop
...@@ -55,6 +55,7 @@ Example URIs: ...@@ -55,6 +55,7 @@ Example URIs:
The following code snippet shows how to create a Data ability: The following code snippet shows how to create a Data ability:
```javascript ```javascript
import featureAbility from '@ohos.ability.featureAbility'
import dataAbility from '@ohos.data.dataAbility' import dataAbility from '@ohos.data.dataAbility'
import dataRdb from '@ohos.data.rdb' import dataRdb from '@ohos.data.rdb'
...@@ -66,7 +67,8 @@ Example URIs: ...@@ -66,7 +67,8 @@ Example URIs:
export default { export default {
onInitialized(abilityInfo) { onInitialized(abilityInfo) {
console.info('DataAbility onInitialized, abilityInfo:' + abilityInfo.bundleName) console.info('DataAbility onInitialized, abilityInfo:' + abilityInfo.bundleName)
dataRdb.getRdbStore(STORE_CONFIG, 1, (err, store) => { let context = featureAbility.getContext()
dataRdb.getRdbStore(context, STORE_CONFIG, 1, (err, store) => {
console.info('DataAbility getRdbStore callback') console.info('DataAbility getRdbStore callback')
store.executeSql(SQL_CREATE_TABLE, []) store.executeSql(SQL_CREATE_TABLE, [])
rdbStore = store rdbStore = store
......
# Page Ability Development # Page Ability Development
## Overview ## Overview
### Concepts ### Concepts
The Page ability implements the ArkUI and provides the capability of interacting with developers. When you create an ability in DevEco Studio, DevEco Studio automatically creates template code. The capabilities related to the Page ability are implemented through the **featureAbility**, and the lifecycle callbacks are implemented through the callbacks in **app.js** or **app.ets**.
The Page ability implements the ArkUI and provides the capability of interacting with developers. When you create an ability in DevEco Studio, DevEco Studio automatically creates template code.
The capabilities related to the Page ability are implemented through the **featureAbility**, and the lifecycle callbacks are implemented through the callbacks in **app.js** or **app.ets**.
### Page Ability Lifecycle ### Page Ability Lifecycle
**Ability lifecycle** Introduction to the Page ability lifecycle:
The Page ability lifecycle defines all states of a Page ability, such as **INACTIVE**, **ACTIVE**, and **BACKGROUND**. The Page ability lifecycle defines all states of a Page ability, such as **INACTIVE**, **ACTIVE**, and **BACKGROUND**.
...@@ -27,27 +31,30 @@ Description of ability lifecycle states: ...@@ -27,27 +31,30 @@ Description of ability lifecycle states:
- **BACKGROUND**: The Page ability runs in the background. After being re-activated, the Page ability enters the **ACTIVE** state. After being destroyed, the Page ability enters the **INITIAL** state. - **BACKGROUND**: The Page ability runs in the background. After being re-activated, the Page ability enters the **ACTIVE** state. After being destroyed, the Page ability enters the **INITIAL** state.
**The following figure shows the relationship between lifecycle callbacks and lifecycle states of the Page ability.** The following figure shows the relationship between lifecycle callbacks and lifecycle states of the Page ability.
![fa-pageAbility-lifecycle](figures/fa-pageAbility-lifecycle.png) ![fa-pageAbility-lifecycle](figures/fa-pageAbility-lifecycle.png)
You can override the lifecycle callbacks provided by the Page ability in the **app.js** or **app.ets** file. Currently, the **app.js** file provides only the **onCreate** and **onDestroy** callbacks, and the **app.ets** file provides the full lifecycle callbacks. You can override the lifecycle callbacks provided by the Page ability in the **app.js** or **app.ets** file. Currently, the **app.js** file provides only the **onCreate** and **onDestroy** callbacks, and the **app.ets** file provides the full lifecycle callbacks.
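For example, a minimal **app.js** that overrides the two available callbacks might look like this (a sketch, not taken from the original):
```javascript
// app.js: only onCreate and onDestroy are available in the JS-based model.
export default {
    onCreate() {
        console.info('Application onCreate')
    },
    onDestroy() {
        console.info('Application onDestroy')
    }
}
```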
### Launch Type ### Launch Type
The ability supports two launch types: singleton and multi-instance. The ability supports two launch types: singleton and multi-instance.
You can specify the launch type by setting **launchType** in the **config.json** file. You can specify the launch type by setting **launchType** in the **config.json** file.
**Table 1** Introduction to startup mode **Table 1** Startup modes
| Launch Type | Meaning | Description | | Launch Type | Meaning | Description |
| ----------- | ------- |---------------- | | ----------- | ------- |---------------- |
| standard | Multi-instance | A new instance is started each time an ability starts.| | standard | Multi-instance | A new instance is started each time an ability starts.|
| singleton | Singleton | Only one instance exists in the system. If an instance already exists when an ability is started, that instance is reused.| | singleton | Singleton | The ability has only one instance in the system. If an instance already exists when an ability is started, that instance is reused.|
By default, **singleton** is used. By default, **singleton** is used.
## Development Guidelines ## Development Guidelines
### Available APIs ### Available APIs
**Table 2** APIs provided by featureAbility **Table 2** APIs provided by featureAbility
...@@ -73,23 +80,21 @@ By default, **singleton** is used. ...@@ -73,23 +80,21 @@ By default, **singleton** is used.
```javascript ```javascript
import featureAbility from '@ohos.ability.featureAbility' import featureAbility from '@ohos.ability.featureAbility'
featureAbility.startAbility({ featureAbility.startAbility({
want: want: {
{ action: "",
action: "", entities: [""],
entities: [""], type: "",
type: "", deviceId: "",
deviceId: "", bundleName: "com.example.myapplication",
bundleName: "com.example.myapplication", /* In the FA model, abilityName consists of package and ability name. */
/* In the FA model, abilityName consists of package and ability name. */ abilityName: "com.example.entry.secondAbility",
abilityName: "com.example.entry.secondAbility", uri: ""
uri: "" }
}, });
},
);
``` ```
### Starting a Remote Page Ability ### Starting a Remote Page Ability
>Note >NOTE
> >
>This feature applies only to system applications, since the **getTrustedDeviceListSync** API of the **DeviceManager** class is open only to system applications. >This feature applies only to system applications, since the **getTrustedDeviceListSync** API of the **DeviceManager** class is open only to system applications.
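As a rough illustration of the flow described above (a sketch under assumptions, not the original sample: the bundle name, ability name, and the choice of the first trusted device are placeholders), a system application could query the trusted device list and pass a device ID in **want**:
```javascript
import featureAbility from '@ohos.ability.featureAbility'
import deviceManager from '@ohos.distributedHardware.deviceManager'

// Create a DeviceManager instance bound to this application's bundle name.
deviceManager.createDeviceManager('com.example.myapplication', (err, dmInstance) => {
    if (err) {
        console.error('createDeviceManager failed: ' + JSON.stringify(err));
        return;
    }
    let deviceList = dmInstance.getTrustedDeviceListSync(); // Available to system applications only.
    if (deviceList.length === 0) {
        console.info('no trusted device found');
        return;
    }
    featureAbility.startAbility({
        want: {
            deviceId: deviceList[0].deviceId, // Use the first trusted device as the target.
            bundleName: 'com.example.myapplication',
            abilityName: 'com.example.entry.secondAbility'
        }
    }).then((data) => {
        console.info('StartRemoteAbility finished, ' + JSON.stringify(data));
    }).catch((err) => {
        console.error('StartRemoteAbility failed, cause: ' + JSON.stringify(err));
    });
});
```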
......
...@@ -10,7 +10,7 @@ IPC/RPC enables a proxy and a stub that run on different processes to communicat ...@@ -10,7 +10,7 @@ IPC/RPC enables a proxy and a stub that run on different processes to communicat
| Class/Interface | Function | Description | | Class/Interface | Function | Description |
| --------------- | -------- | ----------- | | --------------- | -------- | ----------- |
| IRemoteBroker | sptr<IRemoteObject> AsObject() | Obtains the holder of a remote proxy object. This method must be implemented by the derived classes of **IRemoteBroker**. If you call this method on the stub, the **RemoteObject** is returned; if you call this method on the proxy, the proxy object is returned. | | [IRemoteBroker](../reference/apis/js-apis-rpc.md#iremotebroker) | sptr<IRemoteObject> AsObject() | Obtains the holder of a remote proxy object. This method must be implemented by the derived classes of **IRemoteBroker**. If you call this method on the stub, the **RemoteObject** is returned; if you call this method on the proxy, the proxy object is returned. |
| IRemoteStub | virtual int OnRemoteRequest(uint32_t code, MessageParcel &data, MessageParcel &reply, MessageOption &option) | Called to process a request from the proxy and return the result. Derived classes need to override this method. | | IRemoteStub | virtual int OnRemoteRequest(uint32_t code, MessageParcel &data, MessageParcel &reply, MessageOption &option) | Called to process a request from the proxy and return the result. Derived classes need to override this method. |
| IRemoteProxy | | Service proxy classes are derived from the **IRemoteProxy** class. | | IRemoteProxy | | Service proxy classes are derived from the **IRemoteProxy** class. |
...@@ -25,10 +25,10 @@ IPC/RPC enables a proxy and a stub that run on different processes to communicat ...@@ -25,10 +25,10 @@ IPC/RPC enables a proxy and a stub that run on different processes to communicat
``` ```
class ITestAbility : public IRemoteBroker { class ITestAbility : public IRemoteBroker {
public: public:
// DECLARE_INTERFACE_DESCRIPTOR is mandatory, and the input parameter is std::u16string. // DECLARE_INTERFACE_DESCRIPTOR is mandatory, and the input parameter is std::u16string.
DECLARE_INTERFACE_DESCRIPTOR(u"test.ITestAbility"); DECLARE_INTERFACE_DESCRIPTOR(u"test.ITestAbility");
int TRANS_ID_PING_ABILITY = 1; // Define the message code. int TRANS_ID_PING_ABILITY = 1; // Define the message code.
virtual int TestPingAbility(const std::u16string &dummy) = 0; // Define functions. virtual int TestPingAbility(const std::u16string &dummy) = 0; // Define functions.
}; };
``` ```
......
# Device Usage Statistics # Device Usage Statistics
- [Device Usage Statistics Overview](device-usage-statistics-overview.md) - [Device Usage Statistics Overview](device-usage-statistics-overview.md)
- [Device Usage Statistics Development](device-usage-statistics-dev-guide.md) - [Device Usage Statistics Development](device-usage-statistics-use-guide.md)
...@@ -66,25 +66,7 @@ To learn more about the APIs for obtaining device location information, see [Geo ...@@ -66,25 +66,7 @@ To learn more about the APIs for obtaining device location information, see [Geo
If your application needs to access the device location information when running in the background, it must be configured to be able to run in the background and be granted the **ohos.permission.LOCATION_IN_BACKGROUND** permission. In this way, the system continues to report device location information after your application moves to the background. If your application needs to access the device location information when running in the background, it must be configured to be able to run in the background and be granted the **ohos.permission.LOCATION_IN_BACKGROUND** permission. In this way, the system continues to report device location information after your application moves to the background.
To allow your application to access device location information, declare the required permissions in the **module.json** file of your application. The sample code is as follows: You can declare the required permission in your application's configuration file. For details, see [Access Control (Permission) Development](../security/accesstoken-guidelines.md).
```
{
"module": {
"reqPermissions": [
"name": "ohos.permission.LOCATION",
"reason": "$string:reason_description",
"usedScene": {
"ability": ["com.myapplication.LocationAbility"],
"when": "inuse"
}
]
}
}
```
For details about these fields, see [Application Package Structure Configuration File](../quick-start/stage-structure.md).
2. Import the **geolocation** module by which you can implement all APIs related to the basic location capabilities. 2. Import the **geolocation** module by which you can implement all APIs related to the basic location capabilities.
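A minimal sketch of this step (illustrative; it assumes the location permission from the previous step has been granted) is shown below:
```js
import geolocation from '@ohos.geolocation';

// Request the current location once; the result is returned through the callback.
geolocation.getCurrentLocation((err, location) => {
    if (err) {
        console.error('getCurrentLocation failed: ' + JSON.stringify(err));
        return;
    }
    console.info('latitude: ' + location.latitude + ', longitude: ' + location.longitude);
});
```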
......
# Media # Media
- Audio - Audio
- [Audio Overview](audio-overview.md) - [Audio Overview](audio-overview.md)
- [Audio Playback Development](audio-playback.md) - [Audio Playback Development](audio-playback.md)
- [Audio Recording Development](audio-recorder.md) - [Audio Recording Development](audio-recorder.md)
- [Audio Rendering Development](audio-renderer.md) - [Audio Rendering Development](audio-renderer.md)
- [Audio Stream Management Development](audio-stream-manager.md) - [Audio Stream Management Development](audio-stream-manager.md)
- [Audio Capture Development](audio-capturer.md) - [Audio Capture Development](audio-capturer.md)
- [OpenSL ES Audio Playback Development](opensles-playback.md) - [OpenSL ES Audio Playback Development](opensles-playback.md)
- [OpenSL ES Audio Recording Development](opensles-capture.md) - [OpenSL ES Audio Recording Development](opensles-capture.md)
- [Audio Interruption Mode Development](audio-interruptmode.md) - [Audio Interruption Mode Development](audio-interruptmode.md)
- Video - Video
- [Video Playback Development](video-playback.md) - [Video Playback Development](video-playback.md)
- [Video Recording Development](video-recorder.md) - [Video Recording Development](video-recorder.md)
- Image
- Image
- [Image Development](image.md) - [Image Development](image.md)
- Camera - Camera
- [Camera Development](camera.md) - [Camera Development](camera.md)
- [Distributed Camera Development](remote-camera.md) - [Distributed Camera Development](remote-camera.md)
# Audio Capture Development # Audio Capture Development
## When to Use ## Introduction
You can use the APIs provided by **AudioCapturer** to record raw audio files. You can use the APIs provided by **AudioCapturer** to record raw audio files, thereby implementing audio data collection.
### State Check **Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioCapturer** instance. This is because some operations can be performed only when the audio capturer is in a given state. If the application performs an operation when the audio capturer is not in the given state, the system may throw an exception or generate other undefined behavior.
During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioCapturer** instance. This is because some operations can be performed only when the audio capturer is in a given state. If the application performs an operation when the audio capturer is not in the given state, the system may throw an exception or generate other undefined behavior. ## Working Principles
For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8). The following figure shows the audio capturer state transitions.
**Figure 1** Audio capturer state transitions
![audio-capturer-state](figures/audio-capturer-state.png)
- **PREPARED**: The audio capturer enters this state by calling **create()**.
- **RUNNING**: The audio capturer enters this state by calling **start()** when it is in the **PREPARED** state or by calling **start()** when it is in the **STOPPED** state.
- **STOPPED**: The audio capturer in the **RUNNING** state can call **stop()** to stop playing audio data.
- **RELEASED**: The audio capturer in the **PREPARED** or **STOPPED** state can use **release()** to release all occupied hardware and software resources. It will not transit to any other state after it enters the **RELEASED** state.
**Figure 1** Audio capturer state ## Constraints
![](figures/audio-capturer-state.png) Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details about permission configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop ## How to Develop
For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8).
1. Use **createAudioCapturer()** to create an **AudioCapturer** instance. 1. Use **createAudioCapturer()** to create an **AudioCapturer** instance.
Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording status, and register a callback for notification. Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording state, and register a callback for notification.
```js ```js
var audioStreamInfo = { import audio from '@ohos.multimedia.audio';
samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
channels: audio.AudioChannel.CHANNEL_1,
sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
var audioCapturerInfo = {
source: audio.SourceType.SOURCE_TYPE_MIC,
capturerFlags: 1
}
var audioCapturerOptions = {
streamInfo: audioStreamInfo,
capturerInfo: audioCapturerInfo
}
let audioCapturer = await audio.createAudioCapturer(audioCapturerOptions);
var state = audioRenderer.state;
```
2. (Optional) Use **on('stateChange')** to subscribe to audio renderer state changes.
If an application needs to perform some operations when the audio renderer state is updated, the application can subscribe to the state changes. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).
```js let audioStreamInfo = {
audioCapturer.on('stateChange',(state) => { samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
console.info('AudioCapturerLog: Changed State to : ' + state) channels: audio.AudioChannel.CHANNEL_1,
switch (state) { sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
case audio.AudioState.STATE_PREPARED: encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------'); }
console.info('Audio State is : Prepared');
break; let audioCapturerInfo = {
case audio.AudioState.STATE_RUNNING: source: audio.SourceType.SOURCE_TYPE_MIC,
console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------'); capturerFlags: 0 // 0 is the extended flag bit of the audio capturer. The default value is 0.
console.info('Audio State is : Running'); }
break;
case audio.AudioState.STATE_STOPPED: let audioCapturerOptions = {
console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------'); streamInfo: audioStreamInfo,
console.info('Audio State is : stopped'); capturerInfo: audioCapturerInfo
break; }
case audio.AudioState.STATE_RELEASED:
console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------'); let audioCapturer = await audio.createAudioCapturer(audioCapturerOptions);
console.info('Audio State is : released'); console.log('AudioRecLog: Create audio capturer success.');
break;
default:
console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
console.info('Audio State is : invalid');
break;
}
});
``` ```
3. Use **start()** to start audio recording. 2. Use **start()** to start audio recording.
The capturer state will be **STATE_RUNNING** once the audio capturer is started. The application can then begin reading buffers. The capturer state will be **STATE_RUNNING** once the audio capturer is started. The application can then begin reading buffers.
```js ```js
await audioCapturer.start(); import audio from '@ohos.multimedia.audio';
if (audioCapturer.state == audio.AudioState.STATE_RUNNING) {
console.info('AudioRecLog: Capturer started'); async function startCapturer() {
} else { let state = audioCapturer.state;
console.info('AudioRecLog: Capturer start failed'); // The audio capturer can be started only when it is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
} if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
``` state != audio.AudioState.STATE_STOPPED) {
console.info('Capturer is not in a correct state to start');
4. Use **getBufferSize()** to obtain the minimum buffer size to read. return;
}
await audioCapturer.start();
```js state = audioCapturer.state;
var bufferSize = await audioCapturer.getBufferSize(); if (state == audio.AudioState.STATE_RUNNING) {
console.info('AudioRecLog: buffer size: ' + bufferSize); console.info('AudioRecLog: Capturer started');
} else {
console.error('AudioRecLog: Capturer start failed');
}
}
``` ```
5. Read the captured audio data and convert it to a byte stream. Call **read()** repeatedly to read the data until the application wants to stop the recording. 3. Read the captured audio data and convert it to a byte stream. Call **read()** repeatedly to read the data until the application stops the recording.
The following example shows how to write recorded data into a file. The following example shows how to write recorded data into a file.
```js ```js
import fileio from '@ohos.fileio'; import fileio from '@ohos.fileio';
let state = audioCapturer.state;
// The read operation can be performed only when the state is STATE_RUNNING.
if (state != audio.AudioState.STATE_RUNNING) {
console.info('Capturer is not in a correct state to read');
return;
}
const path = '/data/data/.pulse_dir/capture_js.wav'; const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the collected audio file.
let fd = fileio.openSync(path, 0o102, 0o777); let fd = fileio.openSync(path, 0o102, 0o777);
if (fd !== null) { if (fd !== null) {
console.info('AudioRecLog: file fd created'); console.info('AudioRecLog: file fd created');
...@@ -115,38 +110,140 @@ If an application needs to perform some operations when the audio renderer state ...@@ -115,38 +110,140 @@ If an application needs to perform some operations when the audio renderer state
console.info('AudioRecLog: file fd opened in append mode'); console.info('AudioRecLog: file fd opened in append mode');
} }
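// Obtain the minimum buffer size with getBufferSize(); it is used by read() below.
let bufferSize = await audioCapturer.getBufferSize();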
var numBuffersToCapture = 150; let numBuffersToCapture = 150; // Write data for 150 times.
while (numBuffersToCapture) { while (numBuffersToCapture) {
var buffer = await audioCapturer.read(bufferSize, true); let buffer = await audioCapturer.read(bufferSize, true);
if (typeof(buffer) == 'undefined') { if (typeof(buffer) == 'undefined') {
console.info('read buffer failed'); console.info('AudioRecLog: read buffer failed');
} else { } else {
var number = fileio.writeSync(fd, buffer); let number = fileio.writeSync(fd, buffer);
console.info('AudioRecLog: data written: ' + number); console.info(`AudioRecLog: data written: ${number}`);
} }
numBuffersToCapture--; numBuffersToCapture--;
} }
``` ```
6. Once the recording is complete, call **stop()** to stop the recording. 4. Once the recording is complete, call **stop()** to stop the recording.
```js
async function StopCapturer() {
let state = audioCapturer.state;
// The audio capturer can be stopped only when it is in STATE_RUNNING or STATE_PAUSED state.
if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
console.info('AudioRecLog: Capturer is not running or paused');
return;
}
await audioCapturer.stop();
state = audioCapturer.state;
if (state == audio.AudioState.STATE_STOPPED) {
console.info('AudioRecLog: Capturer stopped');
} else {
console.error('AudioRecLog: Capturer stop failed');
}
}
``` ```
await audioCapturer.stop();
if (audioCapturer.state == audio.AudioState.STATE_STOPPED) { 5. After the task is complete, call **release()** to release related resources.
console.info('AudioRecLog: Capturer stopped');
} else { ```js
console.info('AudioRecLog: Capturer stop failed'); async function releaseCapturer() {
} let state = audioCapturer.state;
// The audio capturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
console.info('AudioRecLog: Capturer already released');
return;
}
await audioCapturer.release();
state = audioCapturer.state;
if (state == audio.AudioState.STATE_RELEASED) {
console.info('AudioRecLog: Capturer released');
} else {
console.info('AudioRecLog: Capturer release failed');
}
}
``` ```
7. After the task is complete, call **release()** to release related resources. 6. (Optional) Obtain the audio capturer information.
You can use the following code to obtain the audio capturer information:
```js ```js
await audioCapturer.release(); // Obtain the audio capturer state.
if (audioCapturer.state == audio.AudioState.STATE_RELEASED) { let state = audioCapturer.state;
console.info('AudioRecLog: Capturer released');
} else { // Obtain the audio capturer information.
console.info('AudioRecLog: Capturer release failed'); let audioCapturerInfo : audio.AudioCapturerInfo = await audioCapturer.getCapturerInfo();
}
// Obtain the audio stream information.
let audioStreamInfo : audio.AudioStreamInfo = await audioCapturer.getStreamInfo();
// Obtain the audio stream ID.
let audioStreamId : number = await audioCapturer.getAudioStreamId();
// Obtain the Unix timestamp, in nanoseconds.
let audioTime : number = await audioCapturer.getAudioTime();
// Obtain a proper minimum buffer size.
let bufferSize : number = await audioCapturer.getBufferSize();
``` ```
7. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.
After the mark reached event is subscribed to, when the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('markReach', (reachNumber) => {
console.info('Mark reach event Received');
console.info(`The Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for.
```
8. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event.
After the period reached event is subscribed to, each time the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('periodReach', (reachNumber) => {
console.info('Period reach event Received');
console.info(`In this period, the Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for.
```
9. If your application needs to perform some operations when the audio capturer state is updated, it can subscribe to the state change event. When the audio capturer state is updated, the application receives a callback containing the event type.
```js
audioCapturer.on('stateChange', (state) => {
console.info(`AudioCapturerLog: Changed State to : ${state}`)
switch (state) {
case audio.AudioState.STATE_PREPARED:
console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------');
console.info('Audio State is : Prepared');
break;
case audio.AudioState.STATE_RUNNING:
console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------');
console.info('Audio State is : Running');
break;
case audio.AudioState.STATE_STOPPED:
console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------');
console.info('Audio State is : stopped');
break;
case audio.AudioState.STATE_RELEASED:
console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------');
console.info('Audio State is : released');
break;
default:
console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
console.info('Audio State is : invalid');
break;
}
});
```
# Audio Interruption Mode Development # Audio Interruption Mode Development
## When to Use ## Introduction
The audio interruption mode is used to control the playback of multiple audio streams.<br> The audio interruption mode is used to control the playback of multiple audio streams.
Audio applications can set the audio interruption mode to independent or shared under **AudioRenderer**.<br>
In shared mode, multiple audio streams share one session ID. In independent mode, each audio stream has an independent session ID. Audio applications can set the audio interruption mode to independent or shared under **AudioRenderer**.
### Asynchronous Operations In shared mode, multiple audio streams share one session ID. In independent mode, each audio stream has an independent session ID.
To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions. **Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions.
## How to Develop ## How to Develop
For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8). For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.
Set parameters of the **AudioRenderer** instance in **audioRendererOptions**.
1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.<br> This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.
Set parameters of the **AudioRenderer** instance in **audioRendererOptions**.<br>
This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.<br> ```js
```js
import audio from '@ohos.multimedia.audio'; import audio from '@ohos.multimedia.audio';
var audioStreamInfo = {
samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
channels: audio.AudioChannel.CHANNEL_1,
sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
var audioRendererInfo = { var audioStreamInfo = {
content: audio.ContentType.CONTENT_TYPE_SPEECH, samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION, channels: audio.AudioChannel.CHANNEL_1,
rendererFlags: 1 sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
} encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
var audioRendererInfo = {
content: audio.ContentType.CONTENT_TYPE_SPEECH,
usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
rendererFlags: 1
}
var audioRendererOptions = { var audioRendererOptions = {
streamInfo: audioStreamInfo, streamInfo: audioStreamInfo,
rendererInfo: audioRendererInfo rendererInfo: audioRendererInfo
} }
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions); let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
``` ```
2. Set the audio interruption mode.
2. Set the audio interruption mode.
After the **AudioRenderer** instance is initialized, you can set the audio interruption mode.<br> After the **AudioRenderer** instance is initialized, you can set the audio interruption mode.<br>
```js ```js
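// A minimal sketch (not from the original sample): set the independent interrupt mode on the
// audioRenderer instance created in step 1; audio.InterruptMode.SHARE_MODE can be used instead.
let mode = audio.InterruptMode.INDEPENDENT_MODE;
await audioRenderer.setInterruptMode(mode);
console.info('setInterruptMode done');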
......
# Audio Playback Development # Audio Playback Development
## When to Use ## Introduction
You can use audio playback APIs to convert audio data into audible analog signals, play the signals using output devices, and manage playback tasks. You can use audio playback APIs to convert audio data into audible analog signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, stop playback, release resources, set the volume, seek to a playback position, and obtain track information.
**Figure 1** Playback status ## Working Principles
The following figures show the audio playback status changes and the interaction with external modules for audio playback.
**Figure 1** Audio playback state transition
![en-us_image_audio_state_machine](figures/en-us_image_audio_state_machine.png) ![en-us_image_audio_state_machine](figures/en-us_image_audio_state_machine.png)
**Note**: If the status is **Idle**, setting the **src** attribute does not change the status. In addition, after the **src** attribute is set successfully, you must call **reset()** before setting it to another value. **NOTE**: If the status is **Idle**, setting the **src** attribute does not change the status. In addition, after the **src** attribute is set successfully, you must call **reset()** before setting it to another value.
**Figure 2** Layer 0 diagram of audio playback **Figure 2** Interaction with external modules for audio playback
![en-us_image_audio_player](figures/en-us_image_audio_player.png) ![en-us_image_audio_player](figures/en-us_image_audio_player.png)
**NOTE**: When a third-party application calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework and outputs the audio data decoded by the software to the audio HDI of the hardware interface layer to implement audio playback.
## How to Develop ## How to Develop
For details about the APIs, see [AudioPlayer in the Media API](../reference/apis/js-apis-media.md#audioplayer). For details about the APIs, see [AudioPlayer in the Media API](../reference/apis/js-apis-media.md#audioplayer).
> **NOTE**
>
> The method for obtaining the path in the FA model is different from that in the stage model. **pathDir** used in the sample code below is an example. You need to obtain the path based on project requirements. For details about how to obtain the path, see [Application Sandbox Path Guidelines](../reference/apis/js-apis-fileio.md#guidelines).
### Full-Process Scenario ### Full-Process Scenario
The full audio playback process includes creating an instance, setting the URI, playing audio, seeking to the playback position, setting the volume, pausing playback, obtaining track information, stopping playback, resetting the player, and releasing resources. The full audio playback process includes creating an instance, setting the URI, playing audio, seeking to the playback position, setting the volume, pausing playback, obtaining track information, stopping playback, resetting the player, and releasing resources.
...@@ -99,8 +109,9 @@ async function audioPlayerDemo() { ...@@ -99,8 +109,9 @@ async function audioPlayerDemo() {
setCallBack(audioPlayer); // Set the event callbacks. setCallBack(audioPlayer); // Set the event callbacks.
// 2. Set the URI of the audio file. // 2. Set the URI of the audio file.
let fdPath = 'fd://' let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile" command. let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile/01.mp3'; // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => { await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber; fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath); console.info('open fd success fd is' + fdPath);
...@@ -118,6 +129,7 @@ async function audioPlayerDemo() { ...@@ -118,6 +129,7 @@ async function audioPlayerDemo() {
```js ```js
import media from '@ohos.multimedia.media' import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio' import fileIO from '@ohos.fileio'
export class AudioDemo { export class AudioDemo {
// Set the player callbacks. // Set the player callbacks.
setCallBack(audioPlayer) { setCallBack(audioPlayer) {
...@@ -139,8 +151,9 @@ export class AudioDemo { ...@@ -139,8 +151,9 @@ export class AudioDemo {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance. let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks. this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://' let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile" command. let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile/01.mp3'; // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => { await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber; fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath); console.info('open fd success fd is' + fdPath);
...@@ -159,6 +172,7 @@ export class AudioDemo { ...@@ -159,6 +172,7 @@ export class AudioDemo {
```js ```js
import media from '@ohos.multimedia.media' import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio' import fileIO from '@ohos.fileio'
export class AudioDemo { export class AudioDemo {
// Set the player callbacks. // Set the player callbacks.
private isNextMusic = false; private isNextMusic = false;
...@@ -185,8 +199,9 @@ export class AudioDemo { ...@@ -185,8 +199,9 @@ export class AudioDemo {
async nextMusic(audioPlayer) { async nextMusic(audioPlayer) {
this.isNextMusic = true; this.isNextMusic = true;
let nextFdPath = 'fd://' let nextFdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile" command. let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
let nextpath = '/data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile/02.mp3'; // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let nextpath = pathDir + '/02.mp3'
await fileIO.open(nextpath).then((fdNumber) => { await fileIO.open(nextpath).then((fdNumber) => {
nextFdPath = nextFdPath + '' + fdNumber; nextFdPath = nextFdPath + '' + fdNumber;
console.info('open fd success fd is' + nextFdPath); console.info('open fd success fd is' + nextFdPath);
...@@ -202,8 +217,9 @@ export class AudioDemo { ...@@ -202,8 +217,9 @@ export class AudioDemo {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance. let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks. this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://' let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile" command. let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile/01.mp3'; // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => { await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber; fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath); console.info('open fd success fd is' + fdPath);
...@@ -222,6 +238,7 @@ export class AudioDemo { ...@@ -222,6 +238,7 @@ export class AudioDemo {
```js ```js
import media from '@ohos.multimedia.media' import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio' import fileIO from '@ohos.fileio'
export class AudioDemo { export class AudioDemo {
// Set the player callbacks. // Set the player callbacks.
setCallBack(audioPlayer) { setCallBack(audioPlayer) {
...@@ -239,8 +256,9 @@ export class AudioDemo { ...@@ -239,8 +256,9 @@ export class AudioDemo {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance. let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks. this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://' let fdPath = 'fd://'
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile" command. let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.audio.audioplayer/ohos.acts.multimedia.audio.audioplayer/assets/entry/resources/rawfile/01.mp3'; // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
await fileIO.open(path).then((fdNumber) => { await fileIO.open(path).then((fdNumber) => {
fdPath = fdPath + '' + fdNumber; fdPath = fdPath + '' + fdNumber;
console.info('open fd success fd is' + fdPath); console.info('open fd success fd is' + fdPath);
......
# Audio Recording Development # Audio Recording Development
## When to Use ## Introduction
During audio recording, audio signals are captured, encoded, and saved to files. You can specify parameters such as the sampling rate, number of audio channels, encoding format, encapsulation format, and file path for audio recording. During audio recording, audio signals are captured, encoded, and saved to files. You can specify parameters such as the sampling rate, number of audio channels, encoding format, encapsulation format, and output file path for audio recording.
## Working Principles
The following figures show the audio recording state transition and the interaction with external modules for audio recording.
**Figure 1** Audio recording state transition **Figure 1** Audio recording state transition
...@@ -10,10 +14,16 @@ During audio recording, audio signals are captured, encoded, and saved to files. ...@@ -10,10 +14,16 @@ During audio recording, audio signals are captured, encoded, and saved to files.
**Figure 2** Layer 0 diagram of audio recording **Figure 2** Interaction with external modules for audio recording
![en-us_image_audio_recorder_zero](figures/en-us_image_audio_recorder_zero.png) ![en-us_image_audio_recorder_zero](figures/en-us_image_audio_recorder_zero.png)
**NOTE**: When a third-party recording application or recorder calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework to obtain the audio data captured through the audio HDI. The framework layer then encodes the audio data through software and saves the encoded and encapsulated audio data to a file to implement audio recording.
## Constraints
Before developing audio recording, configure the **ohos.permission.MICROPHONE** permission for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop
For details about the APIs, see [AudioRecorder in the Media API](../reference/apis/js-apis-media.md#audiorecorder).
......
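The elided steps above create an **AudioRecorder**, subscribe to its state callbacks, and drive it through prepare, start, stop, and release. A minimal sketch of that flow, assuming the callback-style **AudioRecorder** APIs; the exact **AudioRecorderConfig** field names and enum values should be verified against the reference above:
```js
import media from '@ohos.multimedia.media'

export class AudioRecorderDemo {
  audioRecorder = undefined;

  audioRecorderDemo() {
    this.audioRecorder = media.createAudioRecorder();               // Create an AudioRecorder instance.
    let audioRecorderConfig = {                                     // Recording parameters (illustrative values).
      audioSampleRate: 48000,                                       // Sampling rate.
      numberOfChannels: 2,                                          // Number of audio channels.
      format: media.AudioOutputFormat.AAC_ADTS,                     // Encapsulation format.
      audioSourceType: media.AudioSourceType.AUDIO_SOURCE_TYPE_MIC, // Record from the microphone.
      uri: 'fd://xx'                                                // Output file, identified by a file descriptor.
    };
    this.audioRecorder.on('prepare', () => {                        // Start recording once the recorder is prepared.
      this.audioRecorder.start();
    });
    this.audioRecorder.on('stop', () => {                           // Release resources after recording stops.
      this.audioRecorder.release();
    });
    this.audioRecorder.prepare(audioRecorderConfig);                // Triggers the 'prepare' callback on success.
  }
}
```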
# OpenSL ES Audio Recording Development
## Introduction
You can use OpenSL ES to develop the audio recording function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.
......
# OpenSL ES Audio Playback Development
## Introduction
You can use OpenSL ES to develop the audio playback function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.
...@@ -58,7 +58,7 @@ To use OpenSL ES to develop the audio playback function in OpenHarmony, perform
5. Obtain the **bufferQueueItf** instance of the **SL_IID_OH_BUFFERQUEUE** interface.
   ```c++
   SLOHBufferQueueItf bufferQueueItf;
   (*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_OH_BUFFERQUEUE, &bufferQueueItf);
   ```
......
# Video Playback Development
## Introduction
You can use video playback APIs to convert video data into visible signals, play the signals using output devices, and manage playback tasks. For example, you can start, suspend, and stop playback, release resources, set the volume, seek to a playback position, set the playback speed, and obtain track information. This document describes development for the following video playback scenarios: full-process, normal playback, video switching, and loop playback.
## Working Principles
The following figures show the video playback state transition and the interaction with external modules for video playback.
**Figure 1** Video playback state transition
![en-us_image_video_state_machine](figures/en-us_image_video_state_machine.png)
**Figure 2** Interaction with external modules for video playback
![en-us_image_video_player](figures/en-us_image_video_player.png)
**NOTE**: When a third-party application calls a JS interface provided by the JS interface layer, the framework layer invokes the audio component through the media service of the native framework to output the audio data decoded by the software to the audio HDI. The graphics subsystem outputs the image data decoded by the codec HDI at the hardware interface layer to the display HDI. In this way, video playback is implemented.
*Note: Video playback requires hardware capabilities such as display, audio, and codec.*
1. A third-party application obtains a surface ID from the XComponent.
......
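The elided steps bind that surface ID to a **VideoPlayer** instance and then drive playback. A condensed sketch of the operations listed in the introduction, assuming the promise-style **VideoPlayer** APIs in js-apis-media.md (parameter values are illustrative):
```js
import media from '@ohos.multimedia.media'

export class VideoPlayerDemo {
  // surfaceId comes from the XComponent; fdPath points to the video file, for example 'fd://...'.
  async videoPlayerDemo(surfaceId, fdPath) {
    let videoPlayer = await media.createVideoPlayer();                    // Create a VideoPlayer instance.
    videoPlayer.url = fdPath;                                             // Set the playback source.
    await videoPlayer.setDisplaySurface(surfaceId);                       // Bind the surface obtained from the XComponent.
    await videoPlayer.prepare();                                          // Prepare for playback.
    await videoPlayer.play();                                             // Start playback.
    await videoPlayer.setVolume(0.5);                                     // Set the volume (0.0 to 1.0).
    await videoPlayer.seek(5000);                                         // Seek to a playback position, in milliseconds.
    await videoPlayer.setSpeed(media.PlaybackSpeed.SPEED_FORWARD_2_00_X); // Set the playback speed.
    await videoPlayer.stop();                                             // Stop playback.
    await videoPlayer.release();                                          // Release resources.
  }
}
```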
# Video Recording Development
## Introduction
You can use video recording APIs to capture audio and video signals, encode them, and save them to files. You can start, suspend, resume, and stop recording, and release resources. You can also specify parameters such as the encoding format, encapsulation format, and file path for video recording.
## Working Principles
The following figures show the video recording state transition and the interaction with external modules for video recording.
**Figure 1** Video recording state transition
![en-us_image_video_recorder_state_machine](figures/en-us_image_video_recorder_state_machine.png)
**Figure 2** Interaction with external modules for video recording
![en-us_image_video_recorder_zero](figures/en-us_image_video_recorder_zero.png)
**NOTE**: When a third-party camera application or system camera calls a JS interface provided by the JS interface layer, the framework layer uses the media service of the native framework to invoke the audio component. Through the audio HDI, the audio component captures audio data, encodes the audio data through software, and saves the encoded audio data to a file. The graphics subsystem captures image data through the video HDI, encodes the image data through the video codec HDI, and saves the encoded image data to a file. In this way, video recording is implemented.
## Constraints
Before developing video recording, configure the permissions **ohos.permission.MICROPHONE** and **ohos.permission.CAMERA** for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop
For details about the APIs, see [VideoRecorder in the Media API](../reference/apis/js-apis-media.md#videorecorder9).
...@@ -147,3 +157,4 @@ export class VideoRecorderDemo {
    }
}
```
...@@ -4,6 +4,4 @@
- [Drawing Development](drawing-guidelines.md)
- [Raw File Development](rawfile-guidelines.md)
- [Native Window Development](native-window-guidelines.md)
- [Using MindSpore Lite for Model Inference](mindspore-lite-guidelines.md)
...@@ -4,11 +4,11 @@
The Native Drawing module provides APIs for drawing 2D graphics and text. The following scenarios are common for drawing development:
* Drawing 2D graphics
* Drawing text
## Available APIs
| API| Description|
| -------- | -------- |
| OH_Drawing_BitmapCreate (void) | Creates a bitmap object.|
| OH_Drawing_BitmapBuild (OH_Drawing_Bitmap *, const uint32_t width, const uint32_t height, const OH_Drawing_BitmapFormat *) | Initializes the width and height of a bitmap object and sets the pixel format for the bitmap.|
...@@ -19,7 +19,7 @@ The Native Drawing module provides APIs for drawing 2D graphics and text. The fo
| OH_Drawing_CanvasDrawPath (OH_Drawing_Canvas *, const OH_Drawing_Path *) | Draws a path.|
| OH_Drawing_PathCreate (void) | Creates a path object.|
| OH_Drawing_PathMoveTo (OH_Drawing_Path *, float x, float y) | Sets the start point of a path.|
| OH_Drawing_PathLineTo (OH_Drawing_Path *, float x, float y) | Draws a line segment from the last point of a path to the target point.|
| OH_Drawing_PathClose (OH_Drawing_Path *) | Closes a path. A line segment from the start point to the last point of the path is added.|
| OH_Drawing_PenCreate (void) | Creates a pen object.|
| OH_Drawing_PenSetAntiAlias (OH_Drawing_Pen *, bool) | Sets whether to enable anti-aliasing for a pen. If anti-aliasing is enabled, edges will be drawn with partial transparency.|
...@@ -138,7 +138,7 @@ The following steps describe how to use the canvas and brush of the Native Drawi
   OH_Drawing_BitmapDestory(cBitmap);
   ```
## Development Procedure for Text Drawing
The following steps describe how to use the text drawing and display feature of the Native Drawing module.
1. **Create a canvas and a bitmap.**
...@@ -196,7 +196,8 @@ The following steps describe how to use the text drawing and display feature of
   // Set the maximum width.
   double maxWidth = 800.0;
   OH_Drawing_TypographyLayout(typography, maxWidth);
   // Set the start position for drawing the text on the canvas.
   double position[2] = {10.0, 15.0};
   // Draw the text on the canvas.
   OH_Drawing_TypographyPaint(typography, cCanvas, position[0], position[1]);
   ```
# Native Window Development
## When to Use
**NativeWindow** is a local platform-based window of OpenHarmony that represents the producer of a graphics queue. It provides APIs for you to create a native window from **Surface**, create a native window buffer from **SurfaceBuffer**, and request and flush a buffer.
The following scenarios are common for native window development:
* Request a graphics buffer by using the NAPI provided by **NativeWindow**, write the produced graphics content to the buffer, and flush the buffer to the graphics queue.
* Request and flush a buffer when adapting to the **eglswapbuffer** interface at the EGL.
## Available APIs
| API| Description|
| -------- | -------- |
| OH_NativeWindow_CreateNativeWindowFromSurface (void \*pSurface) | Creates a **NativeWindow** instance. A new **NativeWindow** instance is created each time this function is called.|
| OH_NativeWindow_DestroyNativeWindow (OHNativeWindow \*window) | Decreases the reference count of a **NativeWindow** instance by 1 and, when the reference count reaches 0, destroys the instance.|
| OH_NativeWindow_CreateNativeWindowBufferFromSurfaceBuffer (void \*pSurfaceBuffer) | Creates a **NativeWindowBuffer** instance. A new **NativeWindowBuffer** instance is created each time this function is called.|
| OH_NativeWindow_DestroyNativeWindowBuffer (OHNativeWindowBuffer \*buffer) | Decreases the reference count of a **NativeWindowBuffer** instance by 1 and, when the reference count reaches 0, destroys the instance.|
| OH_NativeWindow_NativeWindowRequestBuffer (OHNativeWindow \*window, OHNativeWindowBuffer \*\*buffer, int \*fenceFd) | Requests a **NativeWindowBuffer** through a **NativeWindow** instance for content production.|
| OH_NativeWindow_NativeWindowFlushBuffer (OHNativeWindow \*window, OHNativeWindowBuffer \*buffer, int fenceFd, Region region) | Flushes the **NativeWindowBuffer** filled with the content to the buffer queue through a **NativeWindow** instance for content consumption.|
| OH_NativeWindow_NativeWindowAbortBuffer (OHNativeWindow \*window, OHNativeWindowBuffer \*buffer) | Returns the **NativeWindowBuffer** to the buffer queue through a **NativeWindow** instance, without filling in any content. The **NativeWindowBuffer** can be used for another request.|
| OH_NativeWindow_NativeWindowHandleOpt (OHNativeWindow \*window, int code,...) | Sets or obtains the attributes of a native window, including the width, height, and content format.|
| OH_NativeWindow_GetBufferHandleFromNative (OHNativeWindowBuffer \*buffer) | Obtains the pointer to a **BufferHandle** of a **NativeWindowBuffer** instance.|
| OH_NativeWindow_NativeObjectReference (void \*obj) | Adds the reference count of a native object.|
| OH_NativeWindow_NativeObjectUnreference (void \*obj) | Decreases the reference count of a native object and, when the reference count reaches 0, destroys this object.|
| OH_NativeWindow_GetNativeObjectMagic (void \*obj) | Obtains the magic ID of a native object.|
| OH_NativeWindow_NativeWindowSetScalingMode (OHNativeWindow \*window, uint32_t sequence, OHScalingMode scalingMode) | Sets the scaling mode of the native window.|
| OH_NativeWindow_NativeWindowSetMetaData(OHNativeWindow \*window, uint32_t sequence, int32_t size, const OHHDRMetaData \*metaData) | Sets the HDR static metadata of the native window.|
| OH_NativeWindow_NativeWindowSetMetaDataSet(OHNativeWindow \*window, uint32_t sequence, OHHDRMetadataKey key, int32_t size, const uint8_t \*metaData) | Sets the HDR static metadata set of the native window.|
| OH_NativeWindow_NativeWindowSetTunnelHandle(OHNativeWindow \*window, const OHExtDataHandle \*handle) | Sets the tunnel handle to the native window.|
## How to Develop
The following describes how to use the NAPI provided by **NativeWindow** to request a graphics buffer, write the produced graphics content to the buffer, and flush the buffer to the graphics queue.
1. Obtain a **NativeWindow** instance. For example, use **Surface** to create a **NativeWindow** instance.
   ```c++
   sptr<OHOS::Surface> cSurface = Surface::CreateSurfaceAsConsumer();
   sptr<IBufferConsumerListener> listener = new BufferConsumerListenerTest();
   cSurface->RegisterConsumerListener(listener);
   sptr<OHOS::IBufferProducer> producer = cSurface->GetProducer();
   sptr<OHOS::Surface> pSurface = Surface::CreateSurfaceAsProducer(producer);
   OHNativeWindow* nativeWindow = OH_NativeWindow_CreateNativeWindow(&pSurface);
   ```
2. Set the attributes of a native window buffer by using **OH_NativeWindow_NativeWindowHandleOpt**.
   ```c++
   // Set the read and write scenarios of the native window buffer.
   int code = SET_USAGE;
   int32_t usage = BUFFER_USAGE_CPU_READ | BUFFER_USAGE_CPU_WRITE | BUFFER_USAGE_MEM_DMA;
   int32_t ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, usage);
   // Set the width and height of the native window buffer.
   code = SET_BUFFER_GEOMETRY;
   int32_t width = 0x100;
   int32_t height = 0x100;
   ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, width, height);
   // Set the stride of the native window buffer.
   code = SET_STRIDE;
   int32_t stride = 0x8;
   ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, stride);
   // Set the format of the native window buffer.
   code = SET_FORMAT;
   int32_t format = PIXEL_FMT_RGBA_8888;
   ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, format);
   ```
3. Request a native window buffer from the graphics queue.
   ```c++
   struct NativeWindowBuffer* buffer = nullptr;
   int fenceFd;
   // Obtain the NativeWindowBuffer instance by calling OH_NativeWindow_NativeWindowRequestBuffer.
   OH_NativeWindow_NativeWindowRequestBuffer(nativeWindow, &buffer, &fenceFd);
   // Obtain the buffer handle by calling OH_NativeWindow_GetBufferHandleFromNative.
   BufferHandle* bufferHandle = OH_NativeWindow_GetBufferHandleFromNative(buffer);
   ```
4. Write the produced content to the native window buffer.
   ```c++
   auto image = static_cast<uint8_t *>(buffer->sfbuffer->GetVirAddr());
   static uint32_t value = 0x00;
   value++;
   // Reinterpret the mapped memory as 32-bit pixels and fill the buffer with a single color value.
   uint32_t *pixel = reinterpret_cast<uint32_t *>(image);
   for (uint32_t x = 0; x < width; x++) {
       for (uint32_t y = 0; y < height; y++) {
           *pixel++ = value;
       }
   }
   ```
5. Flush the native window buffer to the graphics queue.
   ```c++
   // Set the refresh region. If Rect in Region is a null pointer or rectNumber is 0, all contents in the native window buffer are changed.
   Region region{nullptr, 0};
   // Flush the buffer to the consumer through OH_NativeWindow_NativeWindowFlushBuffer, for example, by displaying it on the screen.
   OH_NativeWindow_NativeWindowFlushBuffer(nativeWindow, buffer, fenceFd, region);
   ```
...@@ -2,11 +2,12 @@
- Getting Started
  - [Preparations](start-overview.md)
  - [Getting Started with ArkTS in Stage Model](start-with-ets-stage.md)
  - [Getting Started with ArkTS in FA Model](start-with-ets-fa.md)
  - [Getting Started with JavaScript in FA Model](start-with-js-fa.md)
- Development Fundamentals
  - [Application Package Structure Configuration File (FA Model)](package-structure.md)
  - [Application Package Structure Configuration File (Stage Model)](stage-structure.md)
  - [SysCap](syscap.md)
  - [HarmonyAppProvision Configuration File](app-provision-structure.md)
# Rendering Control
ArkTS provides conditional rendering and loop rendering. Conditional rendering can render state-specific UI content based on the application status. Loop rendering iteratively obtains data from the data source and creates the corresponding component during each iteration.
## Conditional Rendering
Use **if/else** for conditional rendering.
> **NOTE**
>
> - State variables can be used in the **if/else** statement.
>
> - The **if/else** statement can be used to implement rendering of child components.
>
> - The **if/else** statement must be used in container components.
>
> - Some container components limit the type or number of subcomponents. When **if/else** is placed in these components, the limitation applies to components created in **if/else** statements. For example, when **if/else** is used in the **\<Grid>** container component, whose child components can only be **\<GridItem>**, only the **\<GridItem>** component can be used in the **if/else** statement.
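For instance, inside a **\<Grid>**, each branch may create only **\<GridItem>** components. A minimal sketch (the column template and cell content are illustrative):
```ts
Grid() {
  if (this.count % 2 === 0) {
    GridItem() {
      Text('count is even').fontSize(14)
    }
  } else {
    GridItem() {
      Text('count is odd').fontSize(14)
    }
  }
}
.columnsTemplate('1fr 1fr')
```
The following example shows the general **if/else** usage inside a container component: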
```ts
Column() {
if (this.count < 0) {
Text('count is negative').fontSize(14)
} else if (this.count % 2 === 0) {
Text('count is even').fontSize(14)
} else {
Text('count is odd').fontSize(14)
}
}
```
## Loop Rendering
You can use **ForEach** to obtain data from arrays and create components for each data item.
```
ForEach(
arr: any[],
itemGenerator: (item: any, index?: number) => void,
keyGenerator?: (item: any, index?: number) => string
)
```
**Parameters**
| Name | Type | Mandatory| Description |
| ------------- | ------------------------------------- | ---- | ------------------------------------------------------------ |
| arr           | any[]                                 | Yes  | An array, which can be empty, in which case no child component is created. Functions that return an array are also allowed, for example, **arr.slice(1, 3)**. However, such functions must not change any state variables, including the array itself; therefore, functions that modify the array in place, such as **Array.splice**, **Array.sort**, and **Array.reverse**, cannot be used here.|
| itemGenerator | (item: any, index?: number) => void | Yes | A lambda function used to generate one or more child components for each data item in an array. A single child component or a list of child components must be included in parentheses.|
| keyGenerator | (item: any, index?: number) => string | No | An anonymous function used to generate a unique and fixed key value for each data item in an array. This key value must remain unchanged for the data item even when the item is relocated in the array. When the item is replaced by a new item, the key value of the new item must be different from that of the existing item. This key-value generator is optional. However, for performance reasons, it is strongly recommended that the key-value generator be provided, so that the development framework can better identify array changes. For example, if no key-value generator is provided, a reverse of an array will result in rebuilding of all nodes in **ForEach**.|
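When the data items are objects, a stable key usually comes from a unique field of the item rather than from its position in the array. A minimal sketch, assuming a hypothetical **items** array whose elements carry **id** and **label** fields:
```ts
ForEach(this.items,
  (item) => {
    Text(item.label).fontSize(14)   // Child component generated for each data item.
  },
  (item) => item.id.toString()      // Stable, unique key that survives reordering of the array.
)
```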
> **NOTE**
>
> - **ForEach** must be used in container components.
>
> - The generated child components should be allowed in the parent container component of **ForEach**.
>
> - The **itemGenerator** function can contain an **if/else** statement, and an **if/else** statement can contain **ForEach**.
>
> - The call sequence of **itemGenerator** functions may be different from that of the data items in the array. During the development, do not assume whether or when the **itemGenerator** and **keyGenerator** functions are executed. The following is an example of incorrect usage:
>
> ```ts
> ForEach(anArray.map((item1, index1) => { return { i: index1 + 1, data: item1 }; }),
> item => Text(`${item.i}. item.data.label`),
> item => item.data.id.toString())
> ```
## Example
```ts
// xxx.ets
@Entry
@Component
struct MyComponent {
@State arr: number[] = [10, 20, 30]
build() {
Column({ space: 5 }) {
Button('Reverse Array')
.onClick(() => {
this.arr.reverse()
})
ForEach(this.arr, (item: number) => {
Text(`item value: ${item}`).fontSize(18)
Divider().strokeWidth(2)
}, (item: number) => item.toString())
}
}
}
```
![forEach1](figures/forEach1.gif)
## Lazy Loading
You can use **LazyForEach** to iterate over provided data sources and create corresponding components during each iteration.
```ts
LazyForEach(
dataSource: IDataSource,
itemGenerator: (item: any) => void,
keyGenerator?: (item: any) => string
): void
interface IDataSource {
totalCount(): number;
getData(index: number): any;
registerDataChangeListener(listener: DataChangeListener): void;
unregisterDataChangeListener(listener: DataChangeListener): void;
}
interface DataChangeListener {
onDataReloaded(): void;
onDataAdd(index: number): void;
onDataMove(from: number, to: number): void;
onDataDelete(index: number): void;
onDataChange(index: number): void;
}
```
**Parameters**
| Name | Type | Mandatory| Description |
| ------------- | --------------------- | ---- | ------------------------------------------------------------ |
| dataSource | IDataSource | Yes | Object used to implement the **IDataSource** API. You need to implement related APIs. |
| itemGenerator | (item: any) => void | Yes | A lambda function used to generate one or more child components for each data item in an array. A single child component or a list of child components must be included in parentheses.|
| keyGenerator | (item: any) => string | No | An anonymous function used to generate a unique and fixed key value for each data item in an array. This key value must remain unchanged for the data item even when the item is relocated in the array. When the item is replaced by a new item, the key value of the new item must be different from that of the existing item. This key-value generator is optional. However, for performance reasons, it is strongly recommended that the key-value generator be provided, so that the development framework can better identify array changes. For example, if no key-value generator is provided, a reverse of an array will result in rebuilding of all nodes in **LazyForEach**.|
### Description of IDataSource
| Name | Description |
| ------------------------------------------------------------ | ---------------------- |
| totalCount(): number | Obtains the total number of data records. |
| getData(index: number): any | Obtains the data corresponding to the specified index. |
| registerDataChangeListener(listener:DataChangeListener): void | Registers a listener for data changes.|
| unregisterDataChangeListener(listener:DataChangeListener): void | Deregisters a listener for data changes.|
### Description of DataChangeListener
| Name | Description |
| -------------------------------------------------------- | -------------------------------------- |
| onDataReloaded(): void | Invoked when all data is reloaded. |
| onDataAdded(index: number): void (deprecated) | Invoked when data is added to the position indicated by the specified index. |
| onDataMoved(from: number, to: number): void (deprecated) | Invoked when data is moved from the **from** position to the **to** position.|
| onDataDeleted(index: number): void (deprecated) | Invoked when data is deleted from the position indicated by the specified index. |
| onDataChanged(index: number): void (deprecated) | Invoked when data in the position indicated by the specified index is changed. |
| onDataAdd(index: number): void<sup>8+</sup> | Invoked when data is added to the position indicated by the specified index. |
| onDataMove(from: number, to: number): void<sup>8+</sup> | Invoked when data is moved from the **from** position to the **to** position.|
| onDataDelete(index: number): void<sup>8+</sup> | Invoked when data is deleted from the position indicated by the specified index. |
| onDataChange(index: number): void<sup>8+</sup> | Invoked when data in the position indicated by the specified index is changed. |
## Example
```ts
// xxx.ets
class BasicDataSource implements IDataSource {
private listeners: DataChangeListener[] = []
public totalCount(): number {
return 0
}
public getData(index: number): any {
return undefined
}
registerDataChangeListener(listener: DataChangeListener): void {
if (this.listeners.indexOf(listener) < 0) {
console.info('add listener')
this.listeners.push(listener)
}
}
unregisterDataChangeListener(listener: DataChangeListener): void {
const pos = this.listeners.indexOf(listener);
if (pos >= 0) {
console.info('remove listener')
this.listeners.splice(pos, 1)
}
}
notifyDataReload(): void {
this.listeners.forEach(listener => {
listener.onDataReloaded()
})
}
notifyDataAdd(index: number): void {
this.listeners.forEach(listener => {
listener.onDataAdd(index)
})
}
notifyDataChange(index: number): void {
this.listeners.forEach(listener => {
listener.onDataChange(index)
})
}
notifyDataDelete(index: number): void {
this.listeners.forEach(listener => {
listener.onDataDelete(index)
})
}
notifyDataMove(from: number, to: number): void {
this.listeners.forEach(listener => {
listener.onDataMove(from, to)
})
}
}
class MyDataSource extends BasicDataSource {
// Initialize the data list.
private dataArray: string[] = ['/path/image0.png', '/path/image1.png', '/path/image2.png', '/path/image3.png']
public totalCount(): number {
return this.dataArray.length
}
public getData(index: number): any {
return this.dataArray[index]
}
public addData(index: number, data: string): void {
this.dataArray.splice(index, 0, data)
this.notifyDataAdd(index)
}
public pushData(data: string): void {
this.dataArray.push(data)
this.notifyDataAdd(this.dataArray.length - 1)
}
}
@Entry
@Component
struct MyComponent {
private data: MyDataSource = new MyDataSource()
build() {
List({ space: 3 }) {
LazyForEach(this.data, (item: string) => {
ListItem() {
Row() {
Image(item).width(50).height(50)
Text(item).fontSize(20).margin({ left: 10 })
}.margin({ left: 10, right: 10 })
}
.onClick(() => {
// The count increases by one each time the list is clicked.
this.data.pushData('/path/image' + this.data.totalCount() + '.png')
})
}, item => item)
}
}
}
```
> **NOTE**
>
> - **LazyForEach** must be used in the container component. Currently, only the **\<List>**, **\<Grid>**, and **\<Swiper>** components support lazy loading (that is, only the visible part and a small amount of data before and after the visible part are loaded for caching). For other components, all data is loaded at a time.
>
> - **LazyForEach** must create one and only one child component in each iteration.
>
> - The generated child components must be allowed in the parent container component of **LazyForEach**.
>
> - **LazyForEach** can be included in an **if/else** statement, but cannot contain such a statement.
>
> - For the purpose of high-performance rendering, when the **onDataChange** method of the **DataChangeListener** object is used to update the UI, the component update is triggered only when the state variable is used in the child component created by **itemGenerator**.
>
> - The call sequence of **itemGenerator** functions may be different from that of the data items in the data source. During the development, do not assume whether or when the **itemGenerator** and **keyGenerator** functions are executed. The following is an example of incorrect usage:
>
> ```ts
> LazyForEach(dataSource,
>    item => Text(`${item.i}. item.data.label`),
> item => item.data.id.toString())
> ```
![lazyForEach](figures/lazyForEach.gif)
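Deleting items follows the same pattern as the **pushData** method in the example above: update the backing array first, and then notify the registered listeners so that only the affected child component is rebuilt. A minimal sketch of such a method added to **MyDataSource** (not part of the original example):
```ts
// Hypothetical addition to the MyDataSource class defined above.
public deleteData(index: number): void {
  this.dataArray.splice(index, 1)   // Remove the item from the backing array.
  this.notifyDataDelete(index)      // Trigger onDataDelete on registered listeners so the corresponding child component is removed.
}
```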