Commit c9d1cd1b authored by Y yudechen

Merge https://gitee.com/openharmony/docs into new

Change-Id: I400fd2c2841c52400b9806ccd75596be84280ebf

Too many changes to display. To preserve performance, only 1,000 of 1,000+ files are shown.
@@ -155,9 +155,9 @@ zh-cn/application-dev/work-scheduler/ @HelloCrease
 zh-cn/application-dev/internationalization/ @HelloCrease
 zh-cn/application-dev/device/usb-overview.md @ge-yafang
 zh-cn/application-dev/device/usb-guidelines.md @ge-yafang
-zh-cn/application-dev/device/device-location-overview.md @zengyawen
-zh-cn/application-dev/device/device-location-info.md @zengyawen
-zh-cn/application-dev/device/device-location-geocoding.md @zengyawen
+zh-cn/application-dev/device/device-location-overview.md @RayShih
+zh-cn/application-dev/device/device-location-info.md @RayShih
+zh-cn/application-dev/device/device-location-geocoding.md @RayShih
 zh-cn/application-dev/device/sensor-overview.md @HelloCrease
 zh-cn/application-dev/device/sensor-guidelines.md @HelloCrease
 zh-cn/application-dev/device/vibrator-overview.md @HelloCrease
@@ -181,7 +181,7 @@ zh-cn/application-dev/napi/drawing-guidelines.md @ge-yafang
 zh-cn/application-dev/napi/rawfile-guidelines.md @HelloCrease
 zh-cn/application-dev/reference/js-service-widget-ui/ @HelloCrease
 zh-cn/application-dev/faqs/ @zengyawen
-zh-cn/application-dev/file-management/ @qinxiaowang
+zh-cn/application-dev/file-management/ @zengyawen
 zh-cn/application-dev/application-test/ @HelloCrease
 zh-cn/application-dev/device-usage-statistics/ @HelloCrease
@@ -212,7 +212,7 @@ zh-cn/application-dev/reference/apis/js-apis-audio.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-camera.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-image.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-media.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-medialibrary.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-medialibrary.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-i18n.md @HelloCrease
 zh-cn/application-dev/reference/apis/js-apis-intl.md @HelloCrease
 zh-cn/application-dev/reference/apis/js-apis-resource-manager.md @HelloCrease
@@ -239,13 +239,13 @@ zh-cn/application-dev/reference/apis/js-apis-system-storage.md @ge-yafang
 zh-cn/application-dev/reference/apis/js-apis-data-rdb.md @ge-yafang
 zh-cn/application-dev/reference/apis/js-apis-settings.md @ge-yafang
 zh-cn/application-dev/reference/apis/js-apis-data-resultset.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-document.md @qinxiaowang
-zh-cn/application-dev/reference/apis/js-apis-environment.md @qinxiaowang
-zh-cn/application-dev/reference/apis/js-apis-fileio.md @qinxiaowang
-zh-cn/application-dev/reference/apis/js-apis-filemanager.md @qinxiaowang
-zh-cn/application-dev/reference/apis/js-apis-statfs.md @qinxiaowang
-zh-cn/application-dev/reference/apis/js-apis-storage-statistics.md @qinxiaowang
-zh-cn/application-dev/reference/apis/js-apis-volumemanager.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-document.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-environment.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-fileio.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-filemanager.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-statfs.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-storage-statistics.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-volumemanager.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-contact.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-call.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-observer.md @zengyawen
@@ -302,38 +302,38 @@ zh-cn/application-dev/reference/apis/js-apis-vibrator.md @HelloCrease
 zh-cn/application-dev/reference/apis/js-apis-appAccount.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-distributed-account.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-osAccount.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-convertxml.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-process.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-uri.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-url.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-util.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-arraylist.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-deque.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-hashmap.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-hashset.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-lightweightmap.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-lightweightset.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-linkedlist.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-list.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-plainarray.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-queue.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-stack.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-treemap.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-treeset.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-vector.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-worker.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-xml.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-convertxml.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-process.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-uri.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-url.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-util.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-arraylist.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-deque.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-hashmap.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-hashset.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-lightweightmap.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-lightweightset.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-linkedlist.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-list.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-plainarray.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-queue.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-stack.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-treemap.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-treeset.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-vector.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-worker.md @ge-yafang
+zh-cn/application-dev/reference/apis/js-apis-xml.md @ge-yafang
-zh-cn/application-dev/reference/apis/js-apis-testRunner.md @HelloCrease
+zh-cn/application-dev/reference/apis/js-apis-testRunner.md @RayShih
+zh-cn/application-dev/reference/apis/js-apis-resourceschedule-backgroundTaskManager.md @HelloCrease
 zh-cn/application-dev/reference/apis/js-apis-uitest.md @HelloCrease
 zh-cn/application-dev/reference/apis/js-apis-hisysevent.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-privacyManager.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-EnterpriseAdminExtensionAbility.md @HelloCrease
 zh-cn/application-dev/reference/apis/js-apis-animator.md @HelloCrease @qieqiewl @tomatodevboy @niulihua
+zh-cn/application-dev/reference/apis/js-apis-uiappearance.md @HelloCrease @qieqiewl @tomatodevboy @niulihua
 zh-cn/application-dev/reference/apis/js-apis-useriam-faceauth.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-userfilemanager.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-userfilemanager.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-cryptoFramework.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-buffer.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-buffer.md @ge-yafang
 zh-cn/application-dev/reference/apis/development-intro.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-accessibility-extension-context.md @RayShih
 zh-cn/application-dev/reference/apis/js-apis-application-applicationContext.md @RayShih
@@ -366,7 +366,7 @@ zh-cn/application-dev/reference/apis/js-apis-mouseevent.md @HelloCrease
 zh-cn/application-dev/reference/apis/js-apis-nfcController.md @RayShih
 zh-cn/application-dev/reference/apis/js-apis-nfctech.md @RayShih
 zh-cn/application-dev/reference/apis/js-apis-pointer.md @HelloCrease
-zh-cn/application-dev/reference/apis/js-apis-securityLabel.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-securityLabel.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-system-app.md @RayShih @shuaytao @wangzhen107 @inter515
 zh-cn/application-dev/reference/apis/js-apis-system-battery.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-system-bluetooth.md @RayShih
@@ -374,7 +374,7 @@ zh-cn/application-dev/reference/apis/js-apis-system-brightness.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-system-configuration.md @HelloCrease
 zh-cn/application-dev/reference/apis/js-apis-system-device.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-system-fetch.md @zengyawen
-zh-cn/application-dev/reference/apis/js-apis-system-file.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-system-file.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-system-location.md @RayShih
 zh-cn/application-dev/reference/apis/js-apis-system-mediaquery.md @HelloCrease @qieqiewl @tomatodevboy @niulihua
 zh-cn/application-dev/reference/apis/js-apis-system-network.md @zengyawen
@@ -390,8 +390,8 @@ zh-cn/application-dev/reference/apis/js-apis-accessibility-config.md @RayShih
 zh-cn/application-dev/reference/apis/js-apis-Bundle-BundleStatusCallback.md @RayShih @shuaytao @wangzhen107 @inter515
 zh-cn/application-dev/reference/apis/js-apis-bundle-PackInfo.md @RayShih @shuaytao @wangzhen107 @inter515
 zh-cn/application-dev/reference/apis/js-apis-enterpriseDeviceManager-DeviceSettingsManager.md @HelloCrease
-zh-cn/application-dev/reference/apis/js-apis-fileAccess.md @qinxiaowang
-zh-cn/application-dev/reference/apis/js-apis-fileExtensionInfo.md @qinxiaowang
+zh-cn/application-dev/reference/apis/js-apis-fileAccess.md @zengyawen
+zh-cn/application-dev/reference/apis/js-apis-fileExtensionInfo.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-net-ethernet.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-net-policy.md @zengyawen
 zh-cn/application-dev/reference/apis/js-apis-net-sharing.md @zengyawen
@@ -450,16 +450,6 @@ zh-cn/application-dev/reference/apis/js-apis-formprovider.md @RayShih @littlejer
 zh-cn/application-dev/reference/apis/js-apis-inputmethod-extension-ability.md @ge-yafang
 zh-cn/application-dev/reference/apis/js-apis-inputmethod-extension-context.md @ge-yafang
 zh-cn/application-dev/reference/apis/js-apis-inputmethod-subtype.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errcode-inputmethod-framework.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errcode-usb.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-datashare.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-colorspace-manager.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-display.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-distributed-data_object.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-distributedKVStore.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-pasteboard.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-preferences.md @ge-yafang
-zh-cn/application-dev/reference/errorcodes/errorcode-window.md @ge-yafang
 zh-cn/application-dev/reference/apis/js-apis-application-quickFixManager.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
 zh-cn/application-dev/reference/apis/js-apis-missionManager.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
 zh-cn/application-dev/reference/apis/js-apis-particleAbility.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
@@ -470,3 +460,62 @@ zh-cn/application-dev/reference/apis/js-apis-service-extension-ability.md @RaySh
 zh-cn/application-dev/reference/apis/js-apis-service-extension-context.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
 zh-cn/application-dev/reference/apis/js-apis-wantAgent.md @RayShih @littlejerry1 @gwang2008 @ccllee @chengxingzhen
+zh-cn/application-dev/reference/errorcodes/errcode-ability.md @RayShih
+zh-cn/application-dev/reference/errorcodes/errcode-access-token.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-accessibility.md @RayShih
+zh-cn/application-dev/reference/errorcodes/errcode-account.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-animator.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-app-account.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-audio.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-avsession.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-backgroundTaskMgr.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-batteryStatistics.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-brightness.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-buffer.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-bundle.md @RayShih
+zh-cn/application-dev/reference/errorcodes/errcode-colorspace-manager.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-CommonEventService.md @RayShih
+zh-cn/application-dev/reference/errorcodes/errcode-containers.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-data-rdb.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-datashare.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-device-manager.md @qinxiaowang
+zh-cn/application-dev/reference/errorcodes/errcode-DeviceUsageStatistics.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-display.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-distributed-dataObject.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-distributedKVStore.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-DistributedNotificationService.md @RayShih
+zh-cn/application-dev/reference/errorcodes/errcode-DistributedSchedule.md @RayShih
+zh-cn/application-dev/reference/errorcodes/errcode-enterpriseDeviceManager.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-faultlogger.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-filemanagement.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-geoLocationManager.md @RayShih
+zh-cn/application-dev/reference/errorcodes/errcode-hiappevent.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-hisysevent.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-hiviewdfx-hidebug.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-huks.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-i18n.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-inputmethod-framework.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-multimodalinput.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-nfc.md @RayShih
+zh-cn/application-dev/reference/errorcodes/errcode-pasteboard.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-power.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-preferences.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-promptAction.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-reminderAgentManager.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-request.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-resource-manager.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-router.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-rpc.md @qinxiaowang
+zh-cn/application-dev/reference/errorcodes/errcode-runninglock.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-sensor.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-system-parameterV9.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-thermal.md @zengyawen
+zh-cn/application-dev/reference/errorcodes/errcode-uitest.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-universal.md @RayShih
+zh-cn/application-dev/reference/errorcodes/errcode-update.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-usb.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-vibrator.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-webview.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-window.md @ge-yafang
+zh-cn/application-dev/reference/errorcodes/errcode-workScheduler.md @HelloCrease
+zh-cn/application-dev/reference/errorcodes/errcode-zlib.md @RayShih
# Legal Notices
**Copyright (c) 2020-2022 OpenAtom OpenHarmony. All rights reserved.**
## Copyright
All copyrights of the OpenHarmony documents are reserved by OpenAtom OpenHarmony.
The OpenHarmony documents are licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). For easier understanding, you can visit [Creative Commons](https://creativecommons.org/licenses/by/4.0/) to get a human-readable summary of the license. For the complete content, see [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode).
## Trademarks and Permissions
No content provided in the OpenHarmony documentation shall be deemed as a grant of the approval or right to use any trademark, name, or logo of the OpenAtom Foundation and OpenAtom OpenHarmony. No third parties shall use any of the aforementioned trademarks, names, or logos in any way without explicit prior written permission of the OpenAtom Foundation.
## Disclaimer
The information in the OpenHarmony documents is subject to change without notice.
The OpenHarmony documents are provided without any express or implied warranty. In any case, the OpenAtom Foundation or the copyright owner is not liable for any direct or indirect loss arising from the use of the OpenHarmony documents, regardless of the cause or legal theory, even if the OpenHarmony documents have stated that there is a possibility of such loss.
<!--no_check-->
\ No newline at end of file
@@ -150,7 +150,7 @@ Currently, the OpenHarmony community supports 17 types of development boards, wh
 | System Type| Board Model| Chip Model| <div style="width:200pt">Function Description and Use Case</div> | Application Scenario| Code Repository |
 | -------- | -------- | -------- | -------- | -------- | -------- |
-| Standard system| Runhe HH-SCDAYU200| RK3568 | <div style="width:200pt">Function description:<br>Bolstered by the Rockchip RK3568, the HH-SCDAYU200 development board integrates the dual-core GPU and efficient NPU. Its quad-core 64-bit Cortex-A55 processor uses the advanced 22 nm fabrication process and is clocked at up to 2.0 GHz. The board is packed with Bluetooth, Wi-Fi, audio, video, and camera features, with a wide range of expansion ports to accommodate various video input and outputs. It comes with dual GE auto-sensing RJ45 ports, so it can be used in multi-connectivity products, such as network video recorders (NVRs) and industrial gateways.<br>Use case:<br>[DAYU200 Use Case](device-dev/porting/porting-dayu200-on_standard-demo.md)</div> | Entertainment, easy travel, and smart home, such as kitchen hoods, ovens, and treadmills.| [device_soc_rockchip](https://gitee.com/openharmony/device_soc_rockchip)<br>[device_board_hihope](https://gitee.com/openharmony/device_board_hihope)<br>[vendor_hihope](https://gitee.com/openharmony/vendor_hihope) <br> |
+| Standard system| Runhe HH-SCDAYU200| RK3568 | <div style="width:200pt">Function description:<br>Bolstered by the Rockchip RK3568, the HH-SCDAYU200 development board integrates the dual-core GPU and efficient NPU. Its quad-core 64-bit Cortex-A55 processor uses the advanced 22 nm fabrication process and is clocked at up to 2.0 GHz. The board is packed with Bluetooth, Wi-Fi, audio, video, and camera features, with a wide range of expansion ports to accommodate various video input and outputs. It comes with dual GE auto-sensing RJ45 ports, so it can be used in multi-connectivity products, such as network video recorders (NVRs) and industrial gateways.</div> | Entertainment, easy travel, and smart home, such as kitchen hoods, ovens, and treadmills.| [device_soc_rockchip](https://gitee.com/openharmony/device_soc_rockchip)<br>[device_board_hihope](https://gitee.com/openharmony/device_board_hihope)<br>[vendor_hihope](https://gitee.com/openharmony/vendor_hihope) <br> |
 | Small system| Hispark_Taurus | Hi3516DV300 | <div style="width:200pt">Function Description:<br>Hi3516D V300 is the next-generation system on chip (SoC) for smart HD IP cameras. It integrates the next-generation image signal processor (ISP), H.265 video compression encoder, and high-performance NNIE engine, and delivers high performance in terms of low bit rate, high image quality, intelligent processing and analysis, and low power consumption.</div> | Smart device with screens, such as refrigerators with screens and head units.| [device_soc_hisilicon](https://gitee.com/openharmony/device_soc_hisilicon)<br>[device_board_hisilicon](https://gitee.com/openharmony/device_board_hisilicon)<br>[vendor_hisilicon](https://gitee.com/openharmony/vendor_hisilicon) <br> |
 | Mini system| Multi-modal V200Z-R | BES2600 | <div style="width:200pt">Function description:<br>The multi-modal V200Z-R development board is a high-performance, multi-functional, and cost-effective AIoT SoC powered by the BES2600WM chip of Bestechnic. It integrates a quad-core ARM processor with a frequency of up to 1 GHz as well as dual-mode Wi-Fi and dual-mode Bluetooth. The board supports the 802.11 a/b/g/n/ and BT/BLE 5.2 standards. It is able to accommodate RAM of up to 42 MB and flash memory of up to 32 MB, and supports the MIPI display serial interface (DSI) and camera serial interface (CSI). It is applicable to various AIoT multi-modal VUI and GUI interaction scenarios.<br>Use case:<br>[Multi-modal V200Z-R Use Case](device-dev/porting/porting-bes2600w-on-minisystem-display-demo.md)</div> | Smart hardware, and smart devices with screens, such as speakers and watches.| [device_soc_bestechnic](https://gitee.com/openharmony/device_soc_bestechnic)<br>[device_board_fnlink](https://gitee.com/openharmony/device_board_fnlink)<br>[vendor_bestechnic](https://gitee.com/openharmony/vendor_bestechnic) <br> |
@@ -7,7 +7,7 @@ To ensure successful communications between the client and server, interfaces re

![IDL-interface-description](./figures/IDL-interface-description.png)

**IDL provides the following functions:**

- Declares interfaces provided by system services for external systems, and based on the interface declaration, generates C, C++, JS, or TS code for inter-process communication (IPC) or remote procedure call (RPC) proxies and stubs during compilation.

@@ -17,7 +17,7 @@ IDL provides the following functions:

![IPC-RPC-communication-model](./figures/IPC-RPC-communication-model.png)

**IDL has the following advantages:**

- Services are defined in the form of interfaces in IDL. Therefore, you do not need to focus on implementation details.
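The proxy/stub split that IDL-generated code automates can be pictured with a toy model. Everything below (`TestStub`, `TestProxy`, the numeric command code, the direct method call standing in for the IPC channel) is an illustrative sketch, not generated code; in real IPC the proxy would serialize the call and send it across a process boundary.

```ts
// Interface as it would be declared in IDL and implemented on both sides.
interface IIdlTest {
  testIntTransaction(data: number): number;
}

type Message = { code: number; arg: number };

// Server side: the stub dispatches an incoming request code to the real implementation.
class TestStub implements IIdlTest {
  testIntTransaction(data: number): number {
    return data + 1;
  }
  onRemoteRequest(msg: Message): number {
    // Code 1 identifies testIntTransaction, as a generated COMMAND_ constant would.
    if (msg.code === 1) {
      return this.testIntTransaction(msg.arg);
    }
    throw new Error('unknown command code: ' + msg.code);
  }
}

// Client side: the proxy exposes the same interface but forwards each call
// as a serialized request (here a plain object instead of a real parcel).
class TestProxy implements IIdlTest {
  constructor(private remote: TestStub) {}
  testIntTransaction(data: number): number {
    return this.remote.onRemoteRequest({ code: 1, arg: data });
  }
}
```

The point of the split is that client code only ever sees `IIdlTest`; whether the call is local or remote is an implementation detail of the proxy.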
@@ -433,7 +433,7 @@ export default {
    console.log('ServiceAbility want:' + JSON.stringify(want));
    console.log('ServiceAbility want name:' + want.bundleName)
  } catch(err) {
    console.log('ServiceAbility error:' + err)
  }
  console.info('ServiceAbility onConnect end');
  return new IdlTestImp('connect');
@@ -455,13 +455,13 @@ import featureAbility from '@ohos.ability.featureAbility';
function callbackTestIntTransaction(result: number, ret: number): void {
  if (result == 0 && ret == 124) {
    console.log('case 1 success');
  }
}

function callbackTestStringTransaction(result: number): void {
  if (result == 0) {
    console.log('case 2 success');
  }
}
@@ -472,17 +472,17 @@ var onAbilityConnectDone = {
    testProxy.testStringTransaction('hello', callbackTestStringTransaction);
  },
  onDisconnect:function (elementName) {
    console.log('onDisconnectService onDisconnect');
  },
  onFailed:function (code) {
    console.log('onDisconnectService onFailed');
  }
};
function connectAbility(): void {
  let want = {
    bundleName: 'com.example.myapplicationidl',
    abilityName: 'com.example.myapplicationidl.ServiceAbility'
  };
  let connectionId = -1;
  connectionId = featureAbility.connectAbility(want, onAbilityConnectDone);
@@ -495,7 +495,7 @@ function connectAbility: void {
You can send a class from one process to another through IPC interfaces. However, you must ensure that the peer can use the code of this class, and that the class implements the **marshalling** and **unmarshalling** methods, which OpenHarmony uses to serialize and deserialize objects into a form that either process can identify.

**To create a class that supports the sequenceable type, perform the following operations:**

1. Implement the **marshalling** method, which obtains the current state of the object and serializes the object into a **Parcel** object.
2. Implement the **unmarshalling** method, which deserializes the object from a **Parcel** object.
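The two steps above amount to a symmetric round trip: whatever `marshalling` writes, `unmarshalling` must read back in the same order. A minimal sketch, in which `FakeParcel` is a simplified stand-in for the rpc module's real **MessageParcel** and `MySequenceable` is a hypothetical class:

```ts
// Illustrative stand-in for the Parcel read/write surface used below.
class FakeParcel {
  private ints: number[] = [];
  private strings: string[] = [];
  writeInt(v: number): void { this.ints.push(v); }
  writeString(v: string): void { this.strings.push(v); }
  readInt(): number { return this.ints.shift() as number; }
  readString(): string { return this.strings.shift() as string; }
}

// A hypothetical sequenceable class: marshalling serializes the object's
// state, unmarshalling restores it, field by field, in the same order.
class MySequenceable {
  constructor(public num: number = 0, public str: string = '') {}
  marshalling(parcel: FakeParcel): boolean {
    parcel.writeInt(this.num);
    parcel.writeString(this.str);
    return true;
  }
  unmarshalling(parcel: FakeParcel): boolean {
    this.num = parcel.readInt();
    this.str = parcel.readString();
    return true;
  }
}
```

Mismatched read/write order is the classic failure mode here: both sides parse without error but the fields come back swapped or corrupted.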
@@ -595,7 +595,7 @@ export default class IdlTestServiceProxy implements IIdlTestService {
    let _reply = new rpc.MessageParcel();
    _data.writeInt(data);
    this.proxy.sendRequest(IdlTestServiceProxy.COMMAND_TEST_INT_TRANSACTION, _data, _reply, _option).then(function(result) {
      if (result.errCode == 0) {
        let _errCode = result.reply.readInt();
        if (_errCode != 0) {
          let _returnValue = undefined;
@@ -605,7 +605,7 @@ export default class IdlTestServiceProxy implements IIdlTestService {
        let _returnValue = result.reply.readInt();
        callback(_errCode, _returnValue);
      } else {
        console.log('sendRequest failed, errCode: ' + result.errCode);
      }
    })
  }
@@ -617,11 +617,11 @@ export default class IdlTestServiceProxy implements IIdlTestService {
    let _reply = new rpc.MessageParcel();
    _data.writeString(data);
    this.proxy.sendRequest(IdlTestServiceProxy.COMMAND_TEST_STRING_TRANSACTION, _data, _reply, _option).then(function(result) {
      if (result.errCode == 0) {
        let _errCode = result.reply.readInt();
        callback(_errCode);
      } else {
        console.log('sendRequest failed, errCode: ' + result.errCode);
      }
    })
  }
@@ -644,12 +644,12 @@ import nativeMgr from 'nativeManager';
function testIntTransactionCallback(errCode: number, returnValue: number)
{
  console.log('errCode: ' + errCode + ' returnValue: ' + returnValue);
}

function testStringTransactionCallback(errCode: number)
{
  console.log('errCode: ' + errCode);
}

function jsProxyTriggerCppStub()
@@ -660,6 +660,6 @@ function jsProxyTriggerCppStub()
  tsProxy.testIntTransaction(10, testIntTransactionCallback);
  // Call testStringTransaction.
  tsProxy.testStringTransaction('test', testStringTransactionCallback);
}
```
@@ -8,14 +8,13 @@
- Quick Start
  - Getting Started
    - [Preparations](quick-start/start-overview.md)
    - [Getting Started with ArkTS in Stage Model](quick-start/start-with-ets-stage.md)
    - [Getting Started with ArkTS in FA Model](quick-start/start-with-ets-fa.md)
    - [Getting Started with JavaScript in FA Model](quick-start/start-with-js-fa.md)
  - Development Fundamentals
    - [Application Package Structure Configuration File (FA Model)](quick-start/package-structure.md)
    - [Application Package Structure Configuration File (Stage Model)](quick-start/stage-structure.md)
    - [SysCap](quick-start/syscap.md)
- Development
  - [Ability Development](ability/Readme-EN.md)
...
# Ability Development

- [Ability Framework Overview](ability-brief.md)
- [Context Usage](context-userguide.md)
- FA Model
@@ -19,5 +20,3 @@
- [Ability Assistant Usage](ability-assistant-guidelines.md)
- [ContinuationManager Development](continuationmanager.md)
- [Test Framework Usage](ability-delegator.md)
@@ -96,13 +96,13 @@ Obtain the context by calling **context.getApplicationContext()** in **Ability**

**Example**

```javascript
import Ability from "@ohos.application.Ability";

var lifecycleid;

export default class MainAbility extends Ability {
  onCreate() {
    console.log("MainAbility onCreate")
    let AbilityLifecycleCallback = {
      onAbilityCreate(ability){
        console.log("AbilityLifecycleCallback onAbilityCreate ability:" + JSON.stringify(ability));
@@ -141,7 +141,7 @@ export default class MyAbilityStage extends AbilityStage {
    // 2. Use applicationContext to register and listen for the ability lifecycle in the application.
    lifecycleid = applicationContext.registerAbilityLifecycleCallback(AbilityLifecycleCallback);
    console.log("registerAbilityLifecycleCallback number: " + JSON.stringify(lifecycleid));
  },
  onDestroy() {
    let applicationContext = this.context.getApplicationContext();
    applicationContext.unregisterAbilityLifecycleCallback(lifecycleid, (error, data) => {
...@@ -211,7 +211,13 @@ export default class MainAbility extends Ability { ...@@ -211,7 +211,13 @@ export default class MainAbility extends Ability {
let context = this.context; let context = this.context;
console.log("[Demo] MainAbility bundleName " + context.abilityInfo.bundleName) console.log("[Demo] MainAbility bundleName " + context.abilityInfo.bundleName)
windowStage.setUIContent(this.context, "pages/index", null) windowStage.loadContent("pages/index", (err, data) => {
if (err.code) {
console.error('Failed to load the content. Cause:' + JSON.stringify(err));
return;
}
console.info('Succeeded in loading the content. Data: ' + JSON.stringify(data))
});
} }
onWindowStageDestroy() { onWindowStageDestroy() {
@@ -237,7 +243,7 @@ For details, see [FormExtensionContext](../reference/apis/js-apis-formextensionc

### Obtaining the Context on an ArkTS Page

In the stage model, in the onWindowStageCreate lifecycle of an ability, you can call **setUIContent** of **WindowStage** to load an ArkTS page. In some scenarios, you need to obtain the context on the page to call related APIs.

**How to Obtain**

@@ -245,7 +251,7 @@ Use the API described in the table below to obtain the context associated with a

| API | Description |
| :------------------------------------ | :--------------------------- |
| getContext(component: Object): Object | Obtains the **Context** object associated with a component on the page.|

**Example**

...
@@ -14,20 +14,20 @@ As the entry of the ability continuation capability, **continuationManager** is

## Available APIs

| API | Description|
| ---------------------------------------------------------------------------------------------- | ----------- |
| registerContinuation(callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| registerContinuation(options: ContinuationExtraParams, callback: AsyncCallback\<number>): void | Registers the continuation management service and obtains a token. This API uses an asynchronous callback to return the result.|
| registerContinuation(options?: ContinuationExtraParams): Promise\<number> | Registers the continuation management service and obtains a token. This API uses a promise to return the result.|
| on(type: "deviceSelected", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device connection events. This API uses an asynchronous callback to return the result.|
| on(type: "deviceUnselected", token: number, callback: Callback\<Array\<ContinuationResult>>): void | Subscribes to device disconnection events. This API uses an asynchronous callback to return the result.|
| off(type: "deviceSelected", token: number): void | Unsubscribes from device connection events.|
| off(type: "deviceUnselected", token: number): void | Unsubscribes from device disconnection events.|
| startContinuationDeviceManager(token: number, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API does not involve any filter parameters and uses an asynchronous callback to return the result.|
| startContinuationDeviceManager(token: number, options: ContinuationExtraParams, callback: AsyncCallback\<void>): void | Starts the device selection module to show the list of available devices. This API uses an asynchronous callback to return the result.|
| startContinuationDeviceManager(token: number, options?: ContinuationExtraParams): Promise\<void> | Starts the device selection module to show the list of available devices. This API uses a promise to return the result.|
| updateContinuationState(token: number, deviceId: string, status: DeviceConnectState, callback: AsyncCallback\<void>): void | Instructs the device selection module to update the device connection state. This API uses an asynchronous callback to return the result.|
| updateContinuationState(token: number, deviceId: string, status: DeviceConnectState): Promise\<void> | Instructs the device selection module to update the device connection state. This API uses a promise to return the result.|
| unregisterContinuation(token: number, callback: AsyncCallback\<void>): void | Deregisters the continuation management service. This API uses an asynchronous callback to return the result.|
| unregisterContinuation(token: number): Promise\<void> | Deregisters the continuation management service. This API uses a promise to return the result.|
## How to Develop

1. Import the **continuationManager** module.

@@ -36,7 +36,7 @@ As the entry of the ability continuation capability, **continuationManager** is
   import continuationManager from '@ohos.continuation.continuationManager';
   ```
2. Apply for the **DISTRIBUTED_DATASYNC** permission.

   The permission application operation varies according to the ability model in use. In the FA model, add the required permission in the `config.json` file, as follows:
@@ -57,6 +57,7 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
import abilityAccessCtrl from "@ohos.abilityAccessCtrl";
import bundle from '@ohos.bundle';
import featureAbility from '@ohos.ability.featureAbility';

async function requestPermission() {
  let permissions: Array<string> = [
@@ -124,7 +125,8 @@ As the entry of the ability continuation capability, **continuationManager** is
  // If the permission is not granted, call requestPermissionsFromUser to apply for the permission.
  if (needGrantPermission) {
    try {
      // globalThis.context is Ability.context, which must be assigned a value in the MainAbility.ts file in advance.
      await globalThis.context.requestPermissionsFromUser(permissions);
    } catch (err) {
      console.error('app permission request permissions error' + JSON.stringify(err));
    }
@@ -140,13 +142,16 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
let token: number = -1; // Used to save the token returned after the registration. The token will be used when listening for device connection/disconnection events, starting the device selection module, and updating the device connection state.
try {
  continuationManager.registerContinuation().then((data) => {
    console.info('registerContinuation finished, ' + JSON.stringify(data));
    token = data; // Obtain a token and assign a value to the token variable.
  }).catch((err) => {
    console.error('registerContinuation failed, cause: ' + JSON.stringify(err));
  });
} catch (err) {
  console.error('registerContinuation failed, cause: ' + JSON.stringify(err));
}
```
4. Listen for the device connection/disconnection state.

@@ -156,9 +161,10 @@ As the entry of the ability continuation capability, **continuationManager** is
```ts
let remoteDeviceId: string = ""; // Used to save the information about the remote device selected by the user, which will be used for cross-device continuation or collaboration.
try {
  // The token parameter is the token obtained during the registration.
  continuationManager.on("deviceSelected", token, (continuationResults) => {
    console.info('registerDeviceSelectedCallback len: ' + continuationResults.length);
    if (continuationResults.length <= 0) {
      console.info('no selected device');
      return;
@@ -171,13 +177,15 @@ As the entry of the ability continuation capability, **continuationManager** is
      bundleName: 'ohos.samples.continuationmanager',
      abilityName: 'MainAbility'
    };
    // To initiate multi-device collaboration, you must obtain the ohos.permission.DISTRIBUTED_DATASYNC permission.
    globalThis.abilityContext.startAbility(want).then((data) => {
      console.info('StartRemoteAbility finished, ' + JSON.stringify(data));
    }).catch((err) => {
      console.error('StartRemoteAbility failed, cause: ' + JSON.stringify(err));
    });
  });
} catch (err) {
  console.error('on failed, cause: ' + JSON.stringify(err));
}
```
The preceding multi-device collaboration operation is performed across devices in the stage model. For details about this operation in the FA model, see [Page Ability Development](https://gitee.com/openharmony/docs/blob/master/en/application-dev/ability/fa-pageability.md).
@@ -189,35 +197,43 @@ As the entry of the ability continuation capability, **continuationManager** is
let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.CONNECTED;

// The token parameter is the token obtained during the registration, and the remoteDeviceId parameter is the remoteDeviceId obtained.
try {
  continuationManager.updateContinuationState(token, remoteDeviceId, deviceConnectStatus).then((data) => {
    console.info('updateContinuationState finished, ' + JSON.stringify(data));
  }).catch((err) => {
    console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
  });
} catch (err) {
  console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
}
```
Listen for the device disconnection state so that the user can stop cross-device continuation or collaboration in time. The sample code is as follows:

```ts
try {
  // The token parameter is the token obtained during the registration.
  continuationManager.on("deviceUnselected", token, (continuationResults) => {
    console.info('onDeviceUnselected len: ' + continuationResults.length);
    if (continuationResults.length <= 0) {
      console.info('no unselected device');
      return;
    }
    // Update the device connection state.
    let unselectedDeviceId: string = continuationResults[0].id; // Assign the deviceId of the first deselected remote device to the unselectedDeviceId variable.
    let deviceConnectStatus: continuationManager.DeviceConnectState = continuationManager.DeviceConnectState.DISCONNECTING; // Device disconnected.
    // The token parameter is the token obtained during the registration, and the unselectedDeviceId parameter is the unselectedDeviceId obtained.
    continuationManager.updateContinuationState(token, unselectedDeviceId, deviceConnectStatus).then((data) => {
      console.info('updateContinuationState finished, ' + JSON.stringify(data));
    }).catch((err) => {
      console.error('updateContinuationState failed, cause: ' + JSON.stringify(err));
    });
  });
} catch (err) {
  console.error('on failed, cause: ' + JSON.stringify(err));
}
```
5. Start the device selection module to show the list of available devices on the network.

@@ -231,12 +247,16 @@ As the entry of the ability continuation capability, **continuationManager** is
  continuationMode: continuationManager.ContinuationMode.COLLABORATION_SINGLE // Single-choice mode of the device selection module.
};

try {
  // The token parameter is the token obtained during the registration.
  continuationManager.startContinuationDeviceManager(token, continuationExtraParams).then((data) => {
    console.info('startContinuationDeviceManager finished, ' + JSON.stringify(data));
  }).catch((err) => {
    console.error('startContinuationDeviceManager failed, cause: ' + JSON.stringify(err));
  });
} catch (err) {
  console.error('startContinuationDeviceManager failed, cause: ' + JSON.stringify(err));
}
```
6. If you do not need to perform cross-device migration or collaboration operations, you can deregister the continuation management service by passing the token obtained during the registration.
The sample code is as follows:

```ts
try {
  // The token parameter is the token obtained during the registration.
  continuationManager.unregisterContinuation(token).then((data) => {
    console.info('unregisterContinuation finished, ' + JSON.stringify(data));
  }).catch((err) => {
    console.error('unregisterContinuation failed, cause: ' + JSON.stringify(err));
  });
} catch (err) {
  console.error('unregisterContinuation failed, cause: ' + JSON.stringify(err));
}
```
**Table 1** Data ability lifecycle APIs

|API|Description|
|:------|:------|
|onInitialized(info: AbilityInfo): void|Called during ability initialization to initialize the relational database (RDB).|
|update(uri: string, valueBucket: rdb.ValuesBucket, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<number>): void|Updates data in the database.|
|query(uri: string, columns: Array\<string>, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<ResultSet>): void|Queries data in the database.|
|delete(uri: string, predicates: dataAbility.DataAbilityPredicates, callback: AsyncCallback\<number>): void|Deletes one or more data records from the database.|
|normalizeUri(uri: string, callback: AsyncCallback\<string>): void|Normalizes the URI. A normalized URI applies to cross-device use, persistence, backup, and restore. When the context changes, it ensures that the same data item can be referenced.|
|batchInsert(uri: string, valueBuckets: Array\<rdb.ValuesBucket>, callback: AsyncCallback\<number>): void|Inserts multiple data records into the database.|
|denormalizeUri(uri: string, callback: AsyncCallback\<string>): void|Converts a normalized URI generated by **normalizeUri** into a denormalized URI.|
|insert(uri: string, valueBucket: rdb.ValuesBucket, callback: AsyncCallback\<number>): void|Inserts a data record into the database.|
|openFile(uri: string, mode: string, callback: AsyncCallback\<number>): void|Opens a file.|
|getFileTypes(uri: string, mimeTypeFilter: string, callback: AsyncCallback\<Array\<string>>): void|Obtains the MIME type of a file.|
|getType(uri: string, callback: AsyncCallback\<string>): void|Obtains the MIME type matching the data specified by the URI.|
|executeBatch(ops: Array\<DataAbilityOperation>, callback: AsyncCallback\<Array\<DataAbilityResult>>): void|Operates data in the database in batches.|
|call(method: string, arg: string, extras: PacMap, callback: AsyncCallback\<PacMap>): void|Calls a custom API.|
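To make the callback contract in the table concrete, here is a minimal sketch of `insert` and `query` handlers backed by a plain in-memory array instead of a real RDB store. The store, URI, and value names are invented for illustration; an actual Data ability would delegate to **dataRdb** as shown in the creation snippet below.

```javascript
// In-memory stand-in for an RDB store (illustration only; not an OHOS API).
const rows = [];

const dataAbilityStub = {
  insert(uri, valueBucket, callback) {
    // A real Data ability would call rdbStore.insert(TABLE_NAME, valueBucket, ...).
    rows.push(valueBucket);
    callback(null, rows.length); // Report the row ID (here simply the row count).
  },
  query(uri, columns, predicates, callback) {
    // A real Data ability would build RdbPredicates and return a ResultSet.
    const result = rows.map(row =>
      Object.fromEntries(columns.map(c => [c, row[c]])));
    callback(null, result);
  }
};

// The system invokes these handlers with the consumer's URI and values.
dataAbilityStub.insert('dataability:///com.example.data', { name: 'alice', age: 30 }, (err, rowId) => {
  console.log('inserted row ' + rowId); // inserted row 1
});
dataAbilityStub.query('dataability:///com.example.data', ['name'], null, (err, resultSet) => {
  console.log(JSON.stringify(resultSet)); // [{"name":"alice"}]
});
```

Note how every API reports its result through the trailing `AsyncCallback` rather than a return value.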
## How to Develop
The following code snippet shows how to create a Data ability:

```javascript
import featureAbility from '@ohos.ability.featureAbility'
import dataAbility from '@ohos.data.dataAbility'
import dataRdb from '@ohos.data.rdb'

// STORE_CONFIG, SQL_CREATE_TABLE, and rdbStore are defined in the part of the snippet omitted here.
export default {
    onInitialized(abilityInfo) {
        console.info('DataAbility onInitialized, abilityInfo:' + abilityInfo.bundleName)
        let context = featureAbility.getContext()
        dataRdb.getRdbStore(context, STORE_CONFIG, 1, (err, store) => {
            console.info('DataAbility getRdbStore callback')
            store.executeSql(SQL_CREATE_TABLE, [])
            rdbStore = store
        })
    },
    // ...
}
```
# Page Ability Development

## Overview

### Concepts
The Page ability implements the ArkUI and provides the capability of interacting with developers. When you create an ability in DevEco Studio, DevEco Studio automatically creates template code.

The capabilities related to the Page ability are implemented through the **featureAbility**, and the lifecycle callbacks are implemented through the callbacks in **app.js** or **app.ets**.
### Page Ability Lifecycle

Introduction to the Page ability lifecycle:

The Page ability lifecycle defines all states of a Page ability, such as **INACTIVE**, **ACTIVE**, and **BACKGROUND**.
- **BACKGROUND**: The Page ability runs in the background. After being re-activated, the Page ability enters the **ACTIVE** state. After being destroyed, the Page ability enters the **INITIAL** state.

The following figure shows the relationship between lifecycle callbacks and lifecycle states of the Page ability.

![fa-pageAbility-lifecycle](figures/fa-pageAbility-lifecycle.png)

You can override the lifecycle callbacks provided by the Page ability in the **app.js** or **app.ets** file. Currently, the **app.js** file provides only the **onCreate** and **onDestroy** callbacks, and the **app.ets** file provides the full lifecycle callbacks.
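As a minimal sketch, the two callbacks currently supported in **app.js** can be overridden as follows. The log text is illustrative, and in a real project this object is the default export of **app.js**; the direct calls at the end stand in for the system driving the lifecycle.

```javascript
// Sketch of the app.js lifecycle object; in a real project this is `export default { ... }`.
const app = {
  onCreate() {
    console.info('Application onCreate');
  },
  onDestroy() {
    console.info('Application onDestroy');
  }
};

// The system invokes these callbacks; calling them directly here is for illustration only.
app.onCreate();
app.onDestroy();
```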
### Launch Type

The ability supports two launch types: singleton and multi-instance.

You can specify the launch type by setting **launchType** in the **config.json** file.

**Table 1** Startup modes

| Launch Type | Description | Description |
| ----------- | ------- |---------------- |
| standard | Multi-instance | A new instance is started each time an ability starts.|
| singleton | Singleton | The ability has only one instance in the system. If an instance already exists when an ability is started, that instance is reused.|

By default, **singleton** is used.
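For example, the launch type is declared per ability under **abilities** in **config.json**. The sketch below shows only the relevant field; the ability name is a placeholder.

```json
{
  "module": {
    "abilities": [
      {
        "name": ".MainAbility",
        "launchType": "singleton"
      }
    ]
  }
}
```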
## Development Guidelines

### Available APIs

**Table 2** APIs provided by featureAbility
```javascript
import featureAbility from '@ohos.ability.featureAbility'
featureAbility.startAbility({
    want: {
        action: "",
        entities: [""],
        type: "",
        // ... (other want fields omitted)
        /* In the FA model, abilityName consists of package and ability name. */
        abilityName: "com.example.entry.secondAbility",
        uri: ""
    }
});
```
### Starting a Remote Page Ability

>NOTE
>
>This feature applies only to system applications, since the **getTrustedDeviceListSync** API of the **DeviceManager** class is open only to system applications.
| Class/Interface | Function | Description |
| --------------- | -------- | ----------- |
| [IRemoteBroker](../reference/apis/js-apis-rpc.md#iremotebroker) | sptr<IRemoteObject> AsObject() | Obtains the holder of a remote proxy object. This method must be implemented by the derived classes of **IRemoteBroker**. If you call this method on the stub, the **RemoteObject** is returned; if you call this method on the proxy, the proxy object is returned. |
| IRemoteStub | virtual int OnRemoteRequest(uint32_t code, MessageParcel &data, MessageParcel &reply, MessageOption &option) | Called to process a request from the proxy and return the result. Derived classes need to override this method. |
| IRemoteProxy | | Service proxy classes are derived from the **IRemoteProxy** class. |
# Device Usage Statistics

- [Device Usage Statistics Overview](device-usage-statistics-overview.md)
- [Device Usage Statistics Development](device-usage-statistics-use-guide.md)
If your application needs to access the device location information when running on the background, it must be configured to be able to run on the background and be granted the **ohos.permission.LOCATION_IN_BACKGROUND** permission. In this way, the system continues to report device location information after your application moves to the background.

   You can declare the required permission in your application's configuration file. For details, see [Access Control (Permission) Development](../security/accesstoken-guidelines.md).

2. Import the **geolocation** module by which you can implement all APIs related to the basic location capabilities.
- [Video Playback Development](video-playback.md)
- [Video Recording Development](video-recorder.md)
- Image
  - [Image Development](image.md)
# Audio Capture Development

## Introduction

You can use the APIs provided by **AudioCapturer** to record raw audio files, thereby implementing audio data collection.

**Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioCapturer** instance. This is because some operations can be performed only when the audio capturer is in a given state. If the application performs an operation when the audio capturer is not in the given state, the system may throw an exception or generate other undefined behavior.

## Working Principles

The following figure shows the audio capturer state transitions.
**Figure 1** Audio capturer state transitions
![audio-capturer-state](figures/audio-capturer-state.png)
- **PREPARED**: The audio capturer enters this state by calling **create()**.
- **RUNNING**: The audio capturer enters this state by calling **start()** when it is in the **PREPARED** state or by calling **start()** when it is in the **STOPPED** state.
- **STOPPED**: The audio capturer in the **RUNNING** state can call **stop()** to stop capturing audio data.
- **RELEASED**: The audio capturer in the **PREPARED** or **STOPPED** state can call **release()** to release all occupied hardware and software resources. It will not transition to any other state after it enters the **RELEASED** state.
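The transitions above can be summarized as a small lookup table. This sketch is purely illustrative — the state and operation names mirror the figure, not a real OHOS enum — and shows which calls are legal from each state:

```javascript
// Which calls are legal from each audio capturer state, per the transitions above.
const allowedCalls = {
  PREPARED: ['start', 'release'],
  RUNNING:  ['stop'],
  STOPPED:  ['start', 'release'],
  RELEASED: []  // Terminal state: no further transitions.
};

function canCall(state, op) {
  return allowedCalls[state].includes(op);
}

console.log(canCall('PREPARED', 'start'));  // true
console.log(canCall('RELEASED', 'start'));  // false
```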
## Constraints

Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details about permission configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop

For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8).

1. Use **createAudioCapturer()** to create an **AudioCapturer** instance.

   Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording state, and register a callback for notification.
```js
import audio from '@ohos.multimedia.audio';

let audioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
  channels: audio.AudioChannel.CHANNEL_1,
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
let audioCapturerInfo = {
  source: audio.SourceType.SOURCE_TYPE_MIC,
  capturerFlags: 0 // 0 is the extended flag bit of the audio capturer. The default value is 0.
}
let audioCapturerOptions = {
  streamInfo: audioStreamInfo,
  capturerInfo: audioCapturerInfo
}
let audioCapturer = await audio.createAudioCapturer(audioCapturerOptions);
console.log('AudioRecLog: Create audio capturer success.');
```
2. Use **start()** to start audio recording.

   The capturer state will be **STATE_RUNNING** once the audio capturer is started. The application can then begin reading buffers.

```js
import audio from '@ohos.multimedia.audio';
async function startCapturer() {
let state = audioCapturer.state;
// The audio capturer can be started only when it is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
    state != audio.AudioState.STATE_STOPPED) {
console.info('Capturer is not in a correct state to start');
return;
}
await audioCapturer.start();
state = audioCapturer.state;
if (state == audio.AudioState.STATE_RUNNING) {
console.info('AudioRecLog: Capturer started');
} else {
console.error('AudioRecLog: Capturer start failed');
}
}
```
3. Read the captured audio data and convert it to a byte stream. Call **read()** repeatedly to read the data until the application stops the recording.
The following example shows how to write recorded data into a file.

```js
import fileio from '@ohos.fileio';

let state = audioCapturer.state;
// The read operation can be performed only when the state is STATE_RUNNING.
if (state != audio.AudioState.STATE_RUNNING) {
console.info('Capturer is not in a correct state to read');
return;
}
const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the collected audio file.
let fd = fileio.openSync(path, 0o102, 0o777);
if (fd !== null) {
console.info('AudioRecLog: file fd created');
} else {
console.info('AudioRecLog: file fd create : FAILED');
return;
}

fd = fileio.openSync(path, 0o2002, 0o666);
console.info('AudioRecLog: file fd opened in append mode');
let bufferSize = await audioCapturer.getBufferSize(); // Obtain the minimum buffer size to read.
let numBuffersToCapture = 150; // Write data for 150 times.
while (numBuffersToCapture) {
let buffer = await audioCapturer.read(bufferSize, true);
if (typeof(buffer) == 'undefined') {
console.info('AudioRecLog: read buffer failed');
} else {
let number = fileio.writeSync(fd, buffer);
console.info(`AudioRecLog: data written: ${number}`);
}
numBuffersToCapture--;
}
```
4. Once the recording is complete, call **stop()** to stop the recording.
```js
async function StopCapturer() {
let state = audioCapturer.state;
// The audio capturer can be stopped only when it is in STATE_RUNNING or STATE_PAUSED state.
if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
console.info('AudioRecLog: Capturer is not running or paused');
return;
}
await audioCapturer.stop();
state = audioCapturer.state;
if (state == audio.AudioState.STATE_STOPPED) {
console.info('AudioRecLog: Capturer stopped');
} else {
console.error('AudioRecLog: Capturer stop failed');
}
} }
```
5. After the task is complete, call **release()** to release related resources.
```js
async function releaseCapturer() {
let state = audioCapturer.state;
// The audio capturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
console.info('AudioRecLog: Capturer already released');
return;
}
await audioCapturer.release();
state = audioCapturer.state;
if (state == audio.AudioState.STATE_RELEASED) {
console.info('AudioRecLog: Capturer released');
} else {
console.info('AudioRecLog: Capturer release failed');
}
}
```
6. (Optional) Obtain the audio capturer information.
You can use the following code to obtain the audio capturer information:
```js
// Obtain the audio capturer state.
let state = audioCapturer.state;
// Obtain the audio capturer information.
let audioCapturerInfo : audio.AudioCapturerInfo = await audioCapturer.getCapturerInfo();
// Obtain the audio stream information.
let audioStreamInfo : audio.AudioStreamInfo = await audioCapturer.getStreamInfo();
// Obtain the audio stream ID.
let audioStreamId : number = await audioCapturer.getAudioStreamId();
// Obtain the Unix timestamp, in nanoseconds.
let audioTime : number = await audioCapturer.getAudioTime();
// Obtain a proper minimum buffer size.
let bufferSize : number = await audioCapturer.getBufferSize();
```
7. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.
After the mark reached event is subscribed to, when the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('markReach', (reachNumber) => {
console.info('Mark reach event Received');
console.info(`The Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for.
```
8. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event.
After the period reached event is subscribed to, each time the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('periodReach', (reachNumber) => {
console.info('Period reach event Received');
console.info(`In this period, the Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for.
```
9. If your application needs to perform some operations when the audio capturer state is updated, it can subscribe to the state change event. When the audio capturer state is updated, the application receives a callback containing the event type.
```js
audioCapturer.on('stateChange', (state) => {
console.info(`AudioCapturerLog: Changed State to : ${state}`)
switch (state) {
case audio.AudioState.STATE_PREPARED:
console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------');
console.info('Audio State is : Prepared');
break;
case audio.AudioState.STATE_RUNNING:
console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------');
console.info('Audio State is : Running');
break;
case audio.AudioState.STATE_STOPPED:
console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------');
console.info('Audio State is : stopped');
break;
case audio.AudioState.STATE_RELEASED:
console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------');
console.info('Audio State is : released');
break;
default:
console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
console.info('Audio State is : invalid');
break;
}
});
```
# Audio Interruption Mode Development

## Introduction

The audio interruption mode is used to control the playback of multiple audio streams.

Audio applications can set the audio interruption mode to independent or shared under **AudioRenderer**.

In shared mode, multiple audio streams share one session ID. In independent mode, each audio stream has an independent session ID.

**Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions.

## How to Develop

For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
1. Use **createAudioRenderer()** to create an **AudioRenderer** instance. Set parameters of the **AudioRenderer** instance in **audioRendererOptions**.

   This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.

```js
import audio from '@ohos.multimedia.audio';

var audioStreamInfo = {
...@@ -39,7 +40,7 @@ For details about the APIs, see [AudioRenderer in Audio Management](../reference ...@@ -39,7 +40,7 @@ For details about the APIs, see [AudioRenderer in Audio Management](../reference
rendererInfo: audioRendererInfo rendererInfo: audioRendererInfo
} }
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions); let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
``` ```
2. Set the audio interruption mode.
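The mode chosen in this step determines how session IDs are assigned, as described in the overview. A toy model in plain JavaScript (invented names, not the real **AudioRenderer** API) makes the difference concrete:

```javascript
// Toy model of the two interruption modes: in shared mode every stream
// reuses one session ID; in independent mode each stream gets its own.
const SHARED_SESSION_ID = 100; // arbitrary value for this sketch
let nextIndependentId = 101;

function createStreamSession(mode) {
  if (mode === 'shared') {
    return { mode, sessionId: SHARED_SESSION_ID };
  }
  return { mode, sessionId: nextIndependentId++ };
}

const a = createStreamSession('shared');
const b = createStreamSession('shared');
const c = createStreamSession('independent');
const d = createStreamSession('independent');
console.log(a.sessionId === b.sessionId); // true: shared streams share one ID
console.log(c.sessionId !== d.sessionId); // true: independent streams differ
```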
# Audio Playback Development

## Introduction

You can use audio playback APIs to convert audio data into audible analog signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, stop playback, release resources, set the volume, seek to a playback position, and obtain track information.
## Working Principles

The following figures show the audio playback state transition and the interaction with external modules for audio playback.
**Figure 1** Audio playback state transition
![en-us_image_audio_state_machine](figures/en-us_image_audio_state_machine.png)
**NOTE**: If the status is **Idle**, setting the **src** attribute does not change the status. In addition, after the **src** attribute is set successfully, you must call **reset()** before setting it to another value.
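The src/reset rule above can be sketched as a toy state holder (plain JavaScript with invented names; not the real **AudioPlayer** implementation): setting **src** while idle leaves the state unchanged, and **src** may not be set again until **reset()** runs.

```javascript
// Minimal simulation of the rule stated in the NOTE above.
class FakePlayer {
  constructor() { this.state = 'idle'; this.src = undefined; }
  setSrc(uri) {
    if (this.src !== undefined) {
      throw new Error('call reset() before setting src to another value');
    }
    this.src = uri; // state stays 'idle'; playback starts only via play()
  }
  play() { if (this.src !== undefined) { this.state = 'playing'; } }
  reset() { this.state = 'idle'; this.src = undefined; }
}

const p = new FakePlayer();
p.setSrc('fd://1');
console.log(p.state); // prints "idle": setting src did not change the state
p.reset();
p.setSrc('fd://2');   // allowed only after reset()
```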
**Figure 2** Interaction with external modules for audio playback

![en-us_image_audio_player](figures/en-us_image_audio_player.png)
**NOTE**: When a third-party application calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework and outputs the audio data decoded by the software to the audio HDI of the hardware interface layer to implement audio playback.
## How to Develop

For details about the APIs, see [AudioPlayer in the Media API](../reference/apis/js-apis-media.md#audioplayer).
> **NOTE**
>
> The method for obtaining the path in the FA model is different from that in the stage model. **pathDir** used in the sample code below is an example. You need to obtain the path based on project requirements. For details about how to obtain the path, see [Application Sandbox Path Guidelines](../reference/apis/js-apis-fileio.md#guidelines).
### Full-Process Scenario

The full audio playback process includes creating an instance, setting the URI, playing audio, seeking to the playback position, setting the volume, pausing playback, obtaining track information, stopping playback, resetting the player, and releasing resources.
```js
async function audioPlayerDemo() {
  // ...
  setCallBack(audioPlayer); // Set the event callbacks.
  // 2. Set the URI of the audio file.
  let fdPath = 'fd://'
  let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
  // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
  let path = pathDir + '/01.mp3'
  await fileIO.open(path).then((fdNumber) => {
    fdPath = fdPath + '' + fdNumber;
    console.info('open fd success fd is' + fdPath);
    // ...
  })
  // ...
}
```
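For reference, the `fd://` URI assigned to the player is simply the scheme concatenated with the file descriptor number returned by **fileIO.open()**. A hypothetical helper (`buildFdUri` is invented for illustration, not a real API) makes the concatenation explicit:

```javascript
// Assembles the fd:// URI from a numeric file descriptor, exactly as the
// sample above does with string concatenation.
function buildFdUri(fdNumber) {
  let fdPath = 'fd://';
  fdPath = fdPath + '' + fdNumber; // same concatenation as in the sample
  return fdPath;
}

console.log(buildFdUri(3)); // prints "fd://3"
```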
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'

export class AudioDemo {
  // Set the player callbacks.
  setCallBack(audioPlayer) {
    // ...
  }

  async audioPlayerDemo() {
    let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
    this.setCallBack(audioPlayer); // Set the event callbacks.
    let fdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let path = pathDir + '/01.mp3'
    await fileIO.open(path).then((fdNumber) => {
      fdPath = fdPath + '' + fdNumber;
      console.info('open fd success fd is' + fdPath);
      // ...
    })
    // ...
  }
}
```
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'

export class AudioDemo {
  // Set the player callbacks.
  private isNextMusic = false;
  // ...

  async nextMusic(audioPlayer) {
    this.isNextMusic = true;
    let nextFdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let nextpath = pathDir + '/02.mp3'
    await fileIO.open(nextpath).then((fdNumber) => {
      nextFdPath = nextFdPath + '' + fdNumber;
      console.info('open fd success fd is' + nextFdPath);
      // ...
    })
    // ...
  }

  async audioPlayerDemo() {
    let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
    this.setCallBack(audioPlayer); // Set the event callbacks.
    let fdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let path = pathDir + '/01.mp3'
    await fileIO.open(path).then((fdNumber) => {
      fdPath = fdPath + '' + fdNumber;
      console.info('open fd success fd is' + fdPath);
      // ...
    })
    // ...
  }
}
```
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'

export class AudioDemo {
  // Set the player callbacks.
  setCallBack(audioPlayer) {
    // ...
  }

  async audioPlayerDemo() {
    let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
    this.setCallBack(audioPlayer); // Set the event callbacks.
    let fdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let path = pathDir + '/01.mp3'
    await fileIO.open(path).then((fdNumber) => {
      fdPath = fdPath + '' + fdNumber;
      console.info('open fd success fd is' + fdPath);
      // ...
    })
    // ...
  }
}
```
# Audio Recording Development

## Introduction

During audio recording, audio signals are captured, encoded, and saved to files. You can specify parameters such as the sampling rate, number of audio channels, encoding format, encapsulation format, and output file path for audio recording.
## Working Principles
The following figures show the audio recording state transition and the interaction with external modules for audio recording.
**Figure 1** Audio recording state transition
**Figure 2** Interaction with external modules for audio recording

![en-us_image_audio_recorder_zero](figures/en-us_image_audio_recorder_zero.png)
**NOTE**: When a third-party recording application or recorder calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework to obtain the audio data captured through the audio HDI. The framework layer then encodes the audio data through software and saves the encoded and encapsulated audio data to a file to implement audio recording.
## Constraints
Before developing audio recording, configure the **ohos.permission.MICROPHONE** permission for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
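As a sketch of how the permission declaration typically looks in the stage model, the **requestPermissions** field of **module.json5** lists the required permission. This fragment is based on the general stage-model configuration format; verify the exact structure against your project and the Permission Application Guide:

```json
{
  "module": {
    "requestPermissions": [
      {
        "name": "ohos.permission.MICROPHONE"
      }
    ]
  }
}
```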
## How to Develop

For details about the APIs, see [AudioRecorder in the Media API](../reference/apis/js-apis-media.md#audiorecorder).
# Audio Stream Management Development

## Introduction

You can use **AudioStreamManager** to manage audio streams.
## Working Principles

The following figure shows the calling relationship of **AudioStreamManager** APIs.
**Figure 1** AudioStreamManager API calling relationship

![en-us_image_audio_stream_manager](figures/en-us_image_audio_stream_manager.png)

**NOTE**: During application development, use **getStreamManager()** to create an **AudioStreamManager** instance. Then, you can call **on('audioRendererChange')** or **on('audioCapturerChange')** to listen for status, client, and audio attribute changes of the audio playback or recording application. To cancel the listening for these changes, call **off('audioRendererChange')** or **off('audioCapturerChange')**. You can call **getCurrentAudioRendererInfoArray()** to obtain information about the audio playback application, such as the unique audio stream ID, UID of the audio playback client, and audio status. Similarly, you can call **getCurrentAudioCapturerInfoArray()** to obtain information about the audio recording application.
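The on()/off() pattern described in the NOTE above follows the usual event-emitter shape: on() registers a listener for a named event and off() removes it. A plain-JavaScript sketch (`TinyStreamManager` is invented for illustration, not the real **AudioStreamManager**):

```javascript
// Toy subscribe/unsubscribe model mirroring on('audioRendererChange') /
// off('audioRendererChange') from the NOTE above.
class TinyStreamManager {
  constructor() { this.listeners = {}; }
  on(event, cb) {
    (this.listeners[event] = this.listeners[event] || []).push(cb);
  }
  off(event) { delete this.listeners[event]; } // cancel listening
  emit(event, payload) {
    (this.listeners[event] || []).forEach((cb) => cb(payload));
  }
}

const mgr = new TinyStreamManager();
let seen = 0;
mgr.on('audioRendererChange', () => { seen++; });
mgr.emit('audioRendererChange', []); // listener runs: seen becomes 1
mgr.off('audioRendererChange');
mgr.emit('audioRendererChange', []); // no listener: seen stays 1
```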
## How to Develop

For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9).
1. Create an **AudioStreamManager** instance.

   Before using **AudioStreamManager** APIs, you must use **getStreamManager()** to create an **AudioStreamManager** instance.

   ```js
   var audioManager = audio.getAudioManager();
   var audioStreamManager = audioManager.getStreamManager();
   ```
2. (Optional) Call **on('audioRendererChange')** to listen for audio renderer changes.

   If an application needs to receive notifications when the audio playback application status, audio playback client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).

   ```js
   audioStreamManager.on('audioRendererChange', (AudioRendererChangeInfoArray) => {
     // ...
   });
   ```
4. (Optional) Call **on('audioCapturerChange')** to listen for audio capturer changes.

   If an application needs to receive notifications when the audio recording application status, audio recording client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).

   ```js
   audioStreamManager.on('audioCapturerChange', (AudioCapturerChangeInfoArray) => {
     // ...
   });
   ```
6. (Optional) Call **getCurrentAudioRendererInfoArray()** to obtain information about the current audio renderer.

   This API can be used to obtain the unique ID of the audio stream, UID of the audio playback client, audio status, and other information about the audio player. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured, for the device name and device address to be displayed correctly.

   ```js
   await audioStreamManager.getCurrentAudioRendererInfoArray().then( function (AudioRendererChangeInfoArray) {
     // ...
   });
   ```
7. (Optional) Call **getCurrentAudioCapturerInfoArray()** to obtain information about the current audio capturer.

   This API can be used to obtain the unique ID of the audio stream, UID of the audio recording client, audio status, and other information about the audio capturer. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured, for the device name and device address to be displayed correctly.

   ```js
   await audioStreamManager.getCurrentAudioCapturerInfoArray().then( function (AudioCapturerChangeInfoArray) {
     // ...
   });
   ```
# OpenSL ES Audio Recording Development

## Introduction

You can use OpenSL ES to develop the audio recording function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.
# OpenSL ES Audio Playback Development

## Introduction

You can use OpenSL ES to develop the audio playback function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.
5. Obtain the **bufferQueueItf** instance of the **SL_IID_OH_BUFFERQUEUE** interface.

   ```c++
   SLOHBufferQueueItf bufferQueueItf;
   (*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_OH_BUFFERQUEUE, &bufferQueueItf);
   ```
# Video Playback Development

## Introduction

You can use video playback APIs to convert video data into visible signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, stop playback, release resources, set the volume, seek to a playback position, set the playback speed, and obtain track information. This document describes development for the following video playback scenarios: full-process, normal playback, video switching, and loop playback.
## Working Principles
The following figures show the video playback state transition and the interaction with external modules for video playback.
**Figure 1** Video playback state transition

![en-us_image_video_state_machine](figures/en-us_image_video_state_machine.png)
**Figure 2** Interaction with external modules for video playback

![en-us_image_video_player](figures/en-us_image_video_player.png)
**NOTE**: When a third-party application calls a JS interface provided by the JS interface layer, the framework layer invokes the audio component through the media service of the native framework to output the audio data decoded by the software to the audio HDI. The graphics subsystem outputs the image data decoded by the codec HDI at the hardware interface layer to the display HDI. In this way, video playback is implemented.
*Note: Video playback requires hardware capabilities such as display, audio, and codec.*

1. A third-party application obtains a surface ID from the XComponent.
# Video Recording Development

## Introduction

You can use video recording APIs to capture audio and video signals, encode them, and save them to files. You can start, suspend, resume, and stop recording, and release resources. You can also specify parameters such as the encoding format, encapsulation format, and file path for video recording.
## Working Principles
The following figures show the video recording state transition and the interaction with external modules for video recording.
**Figure 1** Video recording state transition

![en-us_image_video_recorder_state_machine](figures/en-us_image_video_recorder_state_machine.png)
**Figure 2** Interaction with external modules for video recording

![en-us_image_video_recorder_zero](figures/en-us_image_video_recorder_zero.png)
**NOTE**: When a third-party camera application or system camera calls a JS interface provided by the JS interface layer, the framework layer uses the media service of the native framework to invoke the audio component. Through the audio HDI, the audio component captures audio data, encodes the audio data through software, and saves the encoded audio data to a file. The graphics subsystem captures image data through the video HDI, encodes the image data through the video codec HDI, and saves the encoded image data to a file. In this way, video recording is implemented.
## Constraints
Before developing video recording, configure the permissions **ohos.permission.MICROPHONE** and **ohos.permission.CAMERA** for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop

For details about the APIs, see [VideoRecorder in the Media API](../reference/apis/js-apis-media.md#videorecorder9).
```js
export class VideoRecorderDemo {
  // ... (methods elided in this excerpt)
}
```
- [Drawing Development](drawing-guidelines.md)
- [Raw File Development](rawfile-guidelines.md)
- [Native Window Development](native-window-guidelines.md)
- [Using MindSpore Lite for Model Inference](mindspore-lite-guidelines.md)
The Native Drawing module provides APIs for drawing 2D graphics and text. The following scenarios are common for drawing development:

* Drawing 2D graphics
* Drawing text

## Available APIs
| OH_Drawing_CanvasDrawPath (OH_Drawing_Canvas *, const OH_Drawing_Path *) | Draws a path.|
| OH_Drawing_PathCreate (void) | Creates a path object.|
| OH_Drawing_PathMoveTo (OH_Drawing_Path *, float x, float y) | Sets the start point of a path.|
| OH_Drawing_PathLineTo (OH_Drawing_Path *, float x, float y) | Draws a line segment from the last point of a path to the target point.|
| OH_Drawing_PathClose (OH_Drawing_Path *) | Closes a path. A line segment from the start point to the last point of the path is added.|
| OH_Drawing_PenCreate (void) | Creates a pen object.|
| OH_Drawing_PenSetAntiAlias (OH_Drawing_Pen *, bool) | Enables or disables anti-aliasing for a pen. If anti-aliasing is enabled, edges will be drawn with partial transparency.|
...@@ -138,7 +138,7 @@ The following steps describe how to use the canvas and brush of the Native Drawi
OH_Drawing_BitmapDestory(cBitmap);
```
## Development Procedure for Text Drawing and Display
The following steps describe how to use the text drawing and display feature of the Native Drawing module.
1. **Create a canvas and a bitmap.**
...@@ -196,7 +196,8 @@ The following steps describe how to use the text drawing and display feature of
// Set the maximum width.
double maxWidth = 800.0;
OH_Drawing_TypographyLayout(typography, maxWidth);
// Set the start position for drawing the text on the canvas.
double position[2] = {10.0, 15.0};
// Draw the text on the canvas.
OH_Drawing_TypographyPaint(typography, cCanvas, position[0], position[1]);
```
# Native Window Development
## When to Use
**NativeWindow** is a local platform-based window of OpenHarmony that represents the producer of a graphics queue. It provides APIs for you to create a native window from **Surface**, create a native window buffer from **SurfaceBuffer**, and request and flush a buffer.
The following scenarios are common for native window development:
* Request a graphics buffer by using the NAPI provided by **NativeWindow**, write the produced graphics content to the buffer, and flush the buffer to the graphics queue.
* Request and flush a buffer when adapting to the **eglswapbuffer** interface at the EGL.
## Available APIs
| API| Description|
| -------- | -------- |
| OH_NativeWindow_CreateNativeWindowFromSurface (void \*pSurface) | Creates a **NativeWindow** instance. A new **NativeWindow** instance is created each time this function is called.|
| OH_NativeWindow_DestroyNativeWindow (OHNativeWindow \*window) | Decreases the reference count of a **NativeWindow** instance by 1 and, when the reference count reaches 0, destroys the instance.|
| OH_NativeWindow_CreateNativeWindowBufferFromSurfaceBuffer (void \*pSurfaceBuffer) | Creates a **NativeWindowBuffer** instance. A new **NativeWindowBuffer** instance is created each time this function is called.|
| OH_NativeWindow_DestroyNativeWindowBuffer (OHNativeWindowBuffer \*buffer) | Decreases the reference count of a **NativeWindowBuffer** instance by 1 and, when the reference count reaches 0, destroys the instance.|
| OH_NativeWindow_NativeWindowRequestBuffer (OHNativeWindow \*window, OHNativeWindowBuffer \*\*buffer, int \*fenceFd) | Requests a **NativeWindowBuffer** through a **NativeWindow** instance for content production.|
| OH_NativeWindow_NativeWindowFlushBuffer (OHNativeWindow \*window, OHNativeWindowBuffer \*buffer, int fenceFd, Region region) | Flushes the **NativeWindowBuffer** filled with the content to the buffer queue through a **NativeWindow** instance for content consumption.|
| OH_NativeWindow_NativeWindowAbortBuffer (OHNativeWindow \*window, OHNativeWindowBuffer \*buffer) | Returns the **NativeWindowBuffer** to the buffer queue through a **NativeWindow** instance, without filling in any content. The **NativeWindowBuffer** can be used for another request.|
| OH_NativeWindow_NativeWindowHandleOpt (OHNativeWindow \*window, int code,...) | Sets or obtains the attributes of a native window, including the width, height, and content format.|
| OH_NativeWindow_GetBufferHandleFromNative (OHNativeWindowBuffer \*buffer) | Obtains the pointer to a **BufferHandle** of a **NativeWindowBuffer** instance.|
| OH_NativeWindow_NativeObjectReference (void \*obj) | Increases the reference count of a native object by 1.|
| OH_NativeWindow_NativeObjectUnreference (void \*obj) | Decreases the reference count of a native object by 1 and, when the reference count reaches 0, destroys this object.|
| OH_NativeWindow_GetNativeObjectMagic (void \*obj) | Obtains the magic ID of a native object.|
| OH_NativeWindow_NativeWindowSetScalingMode (OHNativeWindow \*window, uint32_t sequence, OHScalingMode scalingMode) | Sets the scaling mode of the native window.|
| OH_NativeWindow_NativeWindowSetMetaData(OHNativeWindow \*window, uint32_t sequence, int32_t size, const OHHDRMetaData \*metaData) | Sets the HDR static metadata of the native window.|
| OH_NativeWindow_NativeWindowSetMetaDataSet(OHNativeWindow \*window, uint32_t sequence, OHHDRMetadataKey key, int32_t size, const uint8_t \*metaData) | Sets the HDR static metadata set of the native window.|
| OH_NativeWindow_NativeWindowSetTunnelHandle(OHNativeWindow \*window, const OHExtDataHandle \*handle) | Sets the tunnel handle to the native window.|
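The reference-count contract shared by **OH_NativeWindow_NativeObjectReference**, **OH_NativeWindow_NativeObjectUnreference**, and the two destroy functions can be sketched in TypeScript. This is a simulation for illustration only; the `RefCounted` class and its method names are hypothetical, not part of the NativeWindow API:

```typescript
// Hypothetical simulation of the NativeWindow reference-count contract:
// every instance starts with one reference; releasing the last one destroys it.
class RefCounted {
  private refCount = 1;          // creation hands out the first reference
  public destroyed = false;

  reference(): void {            // models OH_NativeWindow_NativeObjectReference
    this.refCount++;
  }

  unreference(): void {          // models OH_NativeWindow_NativeObjectUnreference
    this.refCount--;
    if (this.refCount === 0) {
      this.destroyed = true;     // object is destroyed when the count reaches 0
    }
  }
}

const nativeObj = new RefCounted();
nativeObj.reference();              // a second owner takes a reference
nativeObj.unreference();            // first owner releases: count back to 1
console.log(nativeObj.destroyed);   // prints false: still alive
nativeObj.unreference();            // last owner releases: count reaches 0
console.log(nativeObj.destroyed);   // prints true: destroyed
```

This is why passing an instance to another component should be paired with a reference/unreference, and why destroy functions only decrease the count rather than freeing unconditionally.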
## How to Develop
The following describes how to use the NAPI provided by **NativeWindow** to request a graphics buffer, write the produced graphics content to the buffer, and flush the buffer to the graphics queue.
1. Obtain a **NativeWindow** instance. For example, use **Surface** to create a **NativeWindow** instance.
```c++
sptr<OHOS::Surface> cSurface = Surface::CreateSurfaceAsConsumer();
sptr<IBufferConsumerListener> listener = new BufferConsumerListenerTest();
cSurface->RegisterConsumerListener(listener);
sptr<OHOS::IBufferProducer> producer = cSurface->GetProducer();
sptr<OHOS::Surface> pSurface = Surface::CreateSurfaceAsProducer(producer);
OHNativeWindow* nativeWindow = OH_NativeWindow_CreateNativeWindow(&pSurface);
```
2. Set the attributes of a native window buffer by using **OH_NativeWindow_NativeWindowHandleOpt**.
```c++
// Set the read and write scenarios of the native window buffer.
int code = SET_USAGE;
int32_t usage = BUFFER_USAGE_CPU_READ | BUFFER_USAGE_CPU_WRITE | BUFFER_USAGE_MEM_DMA;
int32_t ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, usage);
// Set the width and height of the native window buffer.
code = SET_BUFFER_GEOMETRY;
int32_t width = 0x100;
int32_t height = 0x100;
ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, width, height);
// Set the stride of the native window buffer.
code = SET_STRIDE;
int32_t stride = 0x8;
ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, stride);
// Set the format of the native window buffer.
code = SET_FORMAT;
int32_t format = PIXEL_FMT_RGBA_8888;
ret = OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, code, format);
```
3. Request a native window buffer from the graphics queue.
```c++
struct NativeWindowBuffer* buffer = nullptr;
int fenceFd;
// Obtain the NativeWindowBuffer instance by calling OH_NativeWindow_NativeWindowRequestBuffer.
OH_NativeWindow_NativeWindowRequestBuffer(nativeWindow, &buffer, &fenceFd);
// Obtain the buffer handle by calling OH_NativeWindow_GetBufferHandleFromNative.
BufferHandle* bufferHandle = OH_NativeWindow_GetBufferHandleFromNative(buffer);
```
4. Write the produced content to the native window buffer.
```c++
auto image = static_cast<uint8_t *>(buffer->sfbuffer->GetVirAddr());
static uint32_t value = 0x00;
value++;

uint32_t *pixel = reinterpret_cast<uint32_t *>(image);
for (uint32_t x = 0; x < width; x++) {
    for (uint32_t y = 0; y < height; y++) {
        *pixel++ = value;
    }
}
```
5. Flush the native window buffer to the graphics queue.
```c++
// Set the refresh region. If Rect in Region is a null pointer or rectNumber is 0, all contents in the native window buffer are changed.
Region region{nullptr, 0};
// Flush the buffer to the consumer through OH_NativeWindow_NativeWindowFlushBuffer, for example, by displaying it on the screen.
OH_NativeWindow_NativeWindowFlushBuffer(nativeWindow, buffer, fenceFd, region);
```
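The steps above reduce to a request/produce/flush cycle against a buffer queue. A minimal TypeScript simulation of that control flow (all names here are hypothetical stand-ins; this mirrors only the cycle, not the real graphics stack):

```typescript
// Hypothetical in-memory stand-in for the graphics buffer queue.
const WIDTH = 0x100;
const HEIGHT = 0x100;

interface WindowBuffer { pixels: Uint32Array; }

const freeList: WindowBuffer[] = [{ pixels: new Uint32Array(WIDTH * HEIGHT) }];
const flushedQueue: WindowBuffer[] = [];

// Step 3: request a buffer (models OH_NativeWindow_NativeWindowRequestBuffer).
function requestBuffer(): WindowBuffer {
  const buf = freeList.pop();
  if (!buf) throw new Error("no free buffer");
  return buf;
}

// Step 4: produce content -- fill every pixel with one value, like the C++ loop.
function fillBuffer(buf: WindowBuffer, value: number): void {
  buf.pixels.fill(value);
}

// Step 5: flush to the consumer (models OH_NativeWindow_NativeWindowFlushBuffer).
function flushBuffer(buf: WindowBuffer): void {
  flushedQueue.push(buf);
}

const buffer = requestBuffer();
fillBuffer(buffer, 0x01);
flushBuffer(buffer);
console.log(flushedQueue.length);  // prints 1: one buffer handed to the consumer
```

Once the consumer is done with a buffer, a real queue returns it to the free list, which is why a fixed number of buffers can sustain continuous production.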
...@@ -2,11 +2,12 @@
- Getting Started
  - [Preparations](start-overview.md)
  - [Getting Started with ArkTS in Stage Model](start-with-ets-stage.md)
  - [Getting Started with ArkTS in FA Model](start-with-ets-fa.md)
  - [Getting Started with JavaScript in FA Model](start-with-js-fa.md)
- Development Fundamentals
  - [Application Package Structure Configuration File (FA Model)](package-structure.md)
  - [Application Package Structure Configuration File (Stage Model)](stage-structure.md)
  - [SysCap](syscap.md)
  - [HarmonyAppProvision Configuration File](app-provision-structure.md)
# Rendering Control
ArkTS provides conditional rendering and loop rendering. Conditional rendering can render state-specific UI content based on the application status. Loop rendering iteratively obtains data from the data source and creates the corresponding component during each iteration.
## Conditional Rendering
Use **if/else** for conditional rendering.
> **NOTE**
>
> - State variables can be used in the **if/else** statement.
>
> - The **if/else** statement can be used to implement rendering of child components.
>
> - The **if/else** statement must be used in container components.
>
> - Some container components limit the type or number of subcomponents. When **if/else** is placed in these components, the limitation applies to components created in **if/else** statements. For example, when **if/else** is used in the **\<Grid>** container component, whose child components can only be **\<GridItem>**, only the **\<GridItem>** component can be used in the **if/else** statement.
```ts
Column() {
if (this.count < 0) {
Text('count is negative').fontSize(14)
} else if (this.count % 2 === 0) {
Text('count is even').fontSize(14)
} else {
Text('count is odd').fontSize(14)
}
}
```
## Loop Rendering
You can use **ForEach** to obtain data from arrays and create components for each data item.
```
ForEach(
arr: any[],
itemGenerator: (item: any, index?: number) => void,
keyGenerator?: (item: any, index?: number) => string
)
```
**Parameters**
| Name | Type | Mandatory| Description |
| ------------- | ------------------------------------- | ---- | ------------------------------------------------------------ |
| arr | any[] | Yes | An array, which can be empty, in which case no child component is created. Functions that return array-type values are also allowed, for example, **arr.slice(1, 3)**. However, such functions must not change any state variables, including the array itself; for this reason, functions that mutate the array, such as **Array.splice**, **Array.sort**, and **Array.reverse**, cannot be used here.|
| itemGenerator | (item: any, index?: number) => void | Yes | A lambda function used to generate one or more child components for each data item in an array. A single child component or a list of child components must be included in parentheses.|
| keyGenerator | (item: any, index?: number) => string | No | An anonymous function used to generate a unique and fixed key value for each data item in an array. This key value must remain unchanged for the data item even when the item is relocated in the array. When the item is replaced by a new item, the key value of the new item must be different from that of the existing item. This key-value generator is optional. However, for performance reasons, it is strongly recommended that the key-value generator be provided, so that the development framework can better identify array changes. For example, if no key-value generator is provided, a reverse of an array will result in rebuilding of all nodes in **ForEach**.|
> **NOTE**
>
> - **ForEach** must be used in container components.
>
> - The generated child components should be allowed in the parent container component of **ForEach**.
>
> - The **itemGenerator** function can contain an **if/else** statement, and an **if/else** statement can contain **ForEach**.
>
> - The call sequence of **itemGenerator** functions may be different from that of the data items in the array. During the development, do not assume whether or when the **itemGenerator** and **keyGenerator** functions are executed. The following is an example of incorrect usage:
>
> ```ts
> ForEach(anArray.map((item1, index1) => { return { i: index1 + 1, data: item1 }; }),
> item => Text(`${item.i}. item.data.label`),
> item => item.data.id.toString())
> ```
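The stable-key requirement in the **keyGenerator** description can be illustrated outside the framework: a key-based diff reuses existing nodes whose keys are unchanged, so reversing the array reorders nodes instead of rebuilding them. The following is a hypothetical simulation of that reconciliation step, not framework code:

```typescript
// Hypothetical sketch of key-based reconciliation, as a framework might do it.
type UiNode = { key: string; built: boolean };

function reconcile(oldNodes: UiNode[], items: number[],
                   keyOf: (item: number) => string): UiNode[] {
  const byKey = new Map(oldNodes.map(n => [n.key, n] as [string, UiNode]));
  return items.map(item => {
    const key = keyOf(item);
    const existing = byKey.get(key);
    // Reuse the node when the key matches; otherwise build a new one.
    return existing ?? { key, built: true };
  });
}

const keyOf = (item: number) => item.toString();  // stable, unique per item
let nodes = reconcile([], [10, 20, 30], keyOf);   // initial build: 3 new nodes
nodes = nodes.map(n => ({ ...n, built: false })); // mark them as already built
const reversed = reconcile(nodes, [30, 20, 10], keyOf);
// With stable keys, the reverse rebuilds nothing -- every node is reused.
console.log(reversed.every(n => !n.built));       // prints true
```

If the key depended on the index instead, reversing the array would change every key and force every node to be rebuilt, which is exactly the cost the note warns about.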
## Example
```ts
// xxx.ets
@Entry
@Component
struct MyComponent {
@State arr: number[] = [10, 20, 30]
build() {
Column({ space: 5 }) {
Button('Reverse Array')
.onClick(() => {
this.arr.reverse()
})
ForEach(this.arr, (item: number) => {
Text(`item value: ${item}`).fontSize(18)
Divider().strokeWidth(2)
}, (item: number) => item.toString())
}
}
}
```
![forEach1](figures/forEach1.gif)
## Lazy Loading
You can use **LazyForEach** to iterate over provided data sources and create corresponding components during each iteration.
```ts
LazyForEach(
dataSource: IDataSource,
itemGenerator: (item: any) => void,
keyGenerator?: (item: any) => string
): void
interface IDataSource {
totalCount(): number;
getData(index: number): any;
registerDataChangeListener(listener: DataChangeListener): void;
unregisterDataChangeListener(listener: DataChangeListener): void;
}
interface DataChangeListener {
onDataReloaded(): void;
onDataAdd(index: number): void;
onDataMove(from: number, to: number): void;
onDataDelete(index: number): void;
onDataChange(index: number): void;
}
```
**Parameters**
| Name | Type | Mandatory| Description |
| ------------- | --------------------- | ---- | ------------------------------------------------------------ |
| dataSource | IDataSource | Yes | Object used to implement the **IDataSource** API. You need to implement related APIs. |
| itemGenerator | (item: any) => void | Yes | A lambda function used to generate one or more child components for each data item in an array. A single child component or a list of child components must be included in parentheses.|
| keyGenerator | (item: any) => string | No | An anonymous function used to generate a unique and fixed key value for each data item in an array. This key value must remain unchanged for the data item even when the item is relocated in the array. When the item is replaced by a new item, the key value of the new item must be different from that of the existing item. This key-value generator is optional. However, for performance reasons, it is strongly recommended that the key-value generator be provided, so that the development framework can better identify array changes. For example, if no key-value generator is provided, a reverse of an array will result in rebuilding of all nodes in **LazyForEach**.|
### Description of IDataSource
| Name | Description |
| ------------------------------------------------------------ | ---------------------- |
| totalCount(): number | Obtains the total number of data records. |
| getData(index: number): any | Obtains the data corresponding to the specified index. |
| registerDataChangeListener(listener:DataChangeListener): void | Registers a listener for data changes.|
| unregisterDataChangeListener(listener:DataChangeListener): void | Deregisters a listener for data changes.|
### Description of DataChangeListener
| Name | Description |
| -------------------------------------------------------- | -------------------------------------- |
| onDataReloaded(): void | Invoked when all data is reloaded. |
| onDataAdded(index: number): void (deprecated) | Invoked when data is added to the position indicated by the specified index. |
| onDataMoved(from: number, to: number): void (deprecated) | Invoked when data is moved from the **from** position to the **to** position.|
| onDataDeleted(index: number): void (deprecated) | Invoked when data is deleted from the position indicated by the specified index. |
| onDataChanged(index: number): void (deprecated) | Invoked when data in the position indicated by the specified index is changed. |
| onDataAdd(index: number): void<sup>8+</sup> | Invoked when data is added to the position indicated by the specified index. |
| onDataMove(from: number, to: number): void<sup>8+</sup> | Invoked when data is moved from the **from** position to the **to** position.|
| onDataDelete(index: number): void<sup>8+</sup> | Invoked when data is deleted from the position indicated by the specified index. |
| onDataChange(index: number): void<sup>8+</sup> | Invoked when data in the position indicated by the specified index is changed. |
## Example
```ts
// xxx.ets
class BasicDataSource implements IDataSource {
private listeners: DataChangeListener[] = []
public totalCount(): number {
return 0
}
public getData(index: number): any {
return undefined
}
registerDataChangeListener(listener: DataChangeListener): void {
if (this.listeners.indexOf(listener) < 0) {
console.info('add listener')
this.listeners.push(listener)
}
}
unregisterDataChangeListener(listener: DataChangeListener): void {
const pos = this.listeners.indexOf(listener);
if (pos >= 0) {
console.info('remove listener')
this.listeners.splice(pos, 1)
}
}
notifyDataReload(): void {
this.listeners.forEach(listener => {
listener.onDataReloaded()
})
}
notifyDataAdd(index: number): void {
this.listeners.forEach(listener => {
listener.onDataAdd(index)
})
}
notifyDataChange(index: number): void {
this.listeners.forEach(listener => {
listener.onDataChange(index)
})
}
notifyDataDelete(index: number): void {
this.listeners.forEach(listener => {
listener.onDataDelete(index)
})
}
notifyDataMove(from: number, to: number): void {
this.listeners.forEach(listener => {
listener.onDataMove(from, to)
})
}
}
class MyDataSource extends BasicDataSource {
// Initialize the data list.
private dataArray: string[] = ['/path/image0.png', '/path/image1.png', '/path/image2.png', '/path/image3.png']
public totalCount(): number {
return this.dataArray.length
}
public getData(index: number): any {
return this.dataArray[index]
}
public addData(index: number, data: string): void {
this.dataArray.splice(index, 0, data)
this.notifyDataAdd(index)
}
public pushData(data: string): void {
this.dataArray.push(data)
this.notifyDataAdd(this.dataArray.length - 1)
}
}
@Entry
@Component
struct MyComponent {
private data: MyDataSource = new MyDataSource()
build() {
List({ space: 3 }) {
LazyForEach(this.data, (item: string) => {
ListItem() {
Row() {
Image(item).width(50).height(50)
Text(item).fontSize(20).margin({ left: 10 })
}.margin({ left: 10, right: 10 })
}
.onClick(() => {
// The count increases by one each time the list is clicked.
this.data.pushData('/path/image' + this.data.totalCount() + '.png')
})
}, item => item)
}
}
}
```
> **NOTE**
>
> - **LazyForEach** must be used in the container component. Currently, only the **\<List>**, **\<Grid>**, and **\<Swiper>** components support lazy loading (that is, only the visible part and a small amount of data before and after the visible part are loaded for caching). For other components, all data is loaded at a time.
>
> - **LazyForEach** must create one and only one child component in each iteration.
>
> - The generated child components must be allowed in the parent container component of **LazyForEach**.
>
> - **LazyForEach** can be included in an **if/else** statement, but cannot contain such a statement.
>
> - For the purpose of high-performance rendering, when the **onDataChange** method of the **DataChangeListener** object is used to update the UI, the component update is triggered only when the state variable is used in the child component created by **itemGenerator**.
>
> - The call sequence of **itemGenerator** functions may be different from that of the data items in the data source. During the development, do not assume whether or when the **itemGenerator** and **keyGenerator** functions are executed. The following is an example of incorrect usage:
>
> ```ts
> LazyForEach(dataSource,
> item => Text(`${item.i}. item.data.label`),
> item => item.data.id.toString())
> ```
![lazyForEach](figures/lazyForEach.gif)
# Resource Categories and Access
## Resource Categories
Resource files used during application development must be stored in specified directories for management.
### resources Directory
The **resources** directory consists of three types of sub-directories: the **base** sub-directory, qualifiers sub-directories, and the **rawfile** sub-directory. The common resource files used across projects in the stage model are stored in the **resources** directory under **AppScope**.
Example of the **resources** directory:
```
resources
|---base // Default directory
| |---element
| | |---string.json
| |---media
| | |---icon.png
|---en_GB-vertical-car-mdpi // Example of a qualifiers sub-directory, which needs to be created on your own
| |---element
| | |---string.json
| |---media
| | |---icon.png
|---rawfile
```
**Table 1** Classification of the resources directory
| Category | base Sub-directory | Qualifiers Sub-directory | rawfile Sub-directory |
| ---- | ---------------------------------------- | ---------------------------------------- | ---------------------------------------- |
| Structure| The **base** sub-directory is a default directory. If no qualifiers sub-directories in the **resources** directory of the application match the device status, the resource file in the **base** sub-directory will be automatically referenced.<br>Resource group sub-directories are located at the second level of sub-directories to store basic elements such as strings, colors, and boolean values, as well as resource files such as media, animations, and layouts. For details, see [Resource Group Sub-directories](#resource-group-sub-directories).| You need to create qualifiers sub-directories on your own. Each directory name consists of one or more qualifiers that represent the application scenarios or device characteristics. For details, see [Qualifiers Sub-directories](#qualifiers-sub-directories).<br>Resource group sub-directories are located at the second level of sub-directories to store basic elements such as strings, colors, and boolean values, as well as resource files such as media, animations, and layouts. For details, see [Resource Group Sub-directories](#resource-group-sub-directories). | You can create multiple levels of sub-directories with custom directory names. They can be used to store various resource files.<br>However, resource files in the **rawfile** sub-directory will not be matched based on the device status.|
| Compilation| Resource files in the sub-directory are compiled into binary files, and each resource file is assigned an ID. | Resource files in the sub-directory are compiled into binary files, and each resource file is assigned an ID. | Resource files in the sub-directory are directly packed into the application without being compiled, and no IDs will be assigned to the resource files. |
| Reference| Resource files in the sub-directory are referenced based on the resource type and resource name. | Resource files in the sub-directory are referenced based on the resource type and resource name. | Resource files in the sub-directory are referenced based on the file path and file name. |
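The base-versus-qualifiers behavior described in Table 1 is a simple fallback lookup: a qualifiers sub-directory matching the device status wins, and **base** is used otherwise. A hypothetical sketch of that resolution (the resource table and `resolve` helper are made up for illustration):

```typescript
// Hypothetical resource table keyed by sub-directory name, mirroring the
// example resources directory layout shown above.
const resources: Record<string, Record<string, string>> = {
  "base":                    { "icon": "base/media/icon.png" },
  "en_GB-vertical-car-mdpi": { "icon": "en_GB-vertical-car-mdpi/media/icon.png" },
};

// Resolve a resource: the matched qualifiers sub-directory wins,
// and the base sub-directory is the automatic fallback.
function resolve(name: string, matchedDir: string | null): string | undefined {
  if (matchedDir && resources[matchedDir]?.[name] !== undefined) {
    return resources[matchedDir][name];
  }
  return resources["base"][name];
}

console.log(resolve("icon", "en_GB-vertical-car-mdpi")); // qualifiers copy wins
console.log(resolve("icon", null));                      // no match: base copy
```

Note that this fallback never applies to **rawfile**, whose files are referenced by path and are not matched against the device status.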
### Qualifiers Sub-directories
The name of a qualifiers sub-directory consists of one or more qualifiers that represent the application scenarios or device characteristics, covering the mobile country code (MCC), mobile network code (MNC), language, script, country or region, screen orientation, device type, night mode, and screen density. The qualifiers are separated using underscores (\_) or hyphens (\-). Before creating a qualifiers sub-directory, familiarize yourself with the directory naming conventions and the rules for matching qualifiers sub-directories and the device status.
**Naming Conventions for Qualifiers Sub-directories**
- Qualifiers are ordered in the following sequence: **\_MCC_MNC-language_script_country/region-orientation-device-color mode-density**. You can select one or multiple qualifiers to name your sub-directory based on your application scenarios and device characteristics.
- Separation between qualifiers: The language, script, and country/region qualifiers are separated using underscores (\_); the MNC and MCC qualifiers are also separated using underscores (\_); other qualifiers are separated using hyphens (\-). For example, **zh_Hant_CN** and **zh_CN-car-ldpi**.
- Value range of qualifiers: The value of each qualifier must meet the requirements specified in the following table. Otherwise, the resource files in the resources directory cannot be matched.
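The ordering and separator rules above can be captured in a small helper: underscores join members of the MCC_MNC group and of the language_script_country/region group, while hyphens separate groups. The `qualifierDirName` function below is a hypothetical illustration, not part of any SDK:

```typescript
// Hypothetical builder for a qualifiers sub-directory name, following the
// documented order: _MCC_MNC-language_script_country/region-orientation-
// device-color mode-density.
interface Qualifiers {
  mcc?: string; mnc?: string;
  language?: string; script?: string; region?: string;
  orientation?: string; device?: string; colorMode?: string; density?: string;
}

function qualifierDirName(q: Qualifiers): string {
  const groups: string[] = [];
  // Underscore-joined groups.
  const mccMnc = [q.mcc, q.mnc].filter(Boolean).join("_");
  if (mccMnc) groups.push(mccMnc);
  const locale = [q.language, q.script, q.region].filter(Boolean).join("_");
  if (locale) groups.push(locale);
  // Remaining single-qualifier groups, hyphen-separated in order.
  for (const g of [q.orientation, q.device, q.colorMode, q.density]) {
    if (g) groups.push(g);
  }
  return groups.join("-");
}

console.log(qualifierDirName({ language: "zh", script: "Hant", region: "CN" }));
// prints zh_Hant_CN
console.log(qualifierDirName({ language: "zh", region: "CN", device: "car", density: "ldpi" }));
// prints zh_CN-car-ldpi
```

Both outputs match the directory-name examples given in the conventions above.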
**Table 2** Requirements for qualifier values
| Qualifier Type | Description and Value Range |
| ----------- | ---------------------------------------- |
| MCC&MNC| Indicates the MCC and MNC, which are obtained from the network where the device is registered. The MCC can be either followed by the MNC with an underscore (\_) in between or be used independently. For example, **mcc460** indicates China, and **mcc460\_mnc00** indicates China\_China Mobile.<br>For details about the value range, refer to **ITU-T E.212** (the international identification plan for public networks and subscriptions).|
| Language | Indicates the language used by the device. The value consists of two or three lowercase letters. For example, **zh** indicates Chinese, **en** indicates English, and **mai** indicates Maithili.<br>For details about the value range, refer to **ISO 639** (codes for the representation of names of languages).|
| Script | Indicates the script type used by the device. The value starts with one uppercase letter followed by three lowercase letters. For example, **Hans** indicates simplified Chinese, and **Hant** indicates traditional Chinese.<br>For details about the value range, refer to **ISO 15924** (codes for the representation of names of scripts).|
| Country/Region | Indicates the country or region where the user is located. The value consists of two or three uppercase letters or three digits. For example, **CN** indicates China, and **GB** indicates the United Kingdom.<br>For details about the value range, refer to **ISO 3166-1** (codes for the representation of names of countries and their subdivisions).|
| Screen orientation | Indicates the screen orientation of the device. The value can be:<br>- **vertical**: portrait orientation<br>- **horizontal**: landscape orientation|
| Device type | Indicates the device type. The value can be:<br>- **car**: head unit<br>- **tv**: smart TV<br>- **wearable**: smart wearable|
| Color mode | Indicates the color mode of the device. The value can be:<br>- **dark**: dark mode<br>- **light**: light mode|
| Screen density | Indicates the screen density of the device, in dpi. The value can be:<br>- **sdpi**: screen density with small-scale dots per inch (SDPI). This value is applicable for devices with a DPI range of (0, 120].<br>- **mdpi**: screen density with medium-scale dots per inch (MDPI). This value is applicable for devices with a DPI range of (120, 160].<br>- **ldpi**: screen density with large-scale dots per inch (LDPI). This value is applicable for devices with a DPI range of (160, 240].<br>- **xldpi**: screen density with extra-large-scale dots per inch (XLDPI). This value is applicable for devices with a DPI range of (240, 320].<br>- **xxldpi**: screen density with extra-extra-large-scale dots per inch (XXLDPI). This value is applicable for devices with a DPI range of (320, 480].<br>- **xxxldpi**: screen density with extra-extra-extra-large-scale dots per inch (XXXLDPI). This value is applicable for devices with a DPI range of (480, 640].|
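The density buckets above partition the DPI axis into half-open intervals. As an illustration only (this function and its name are not an OpenHarmony API), the lookup can be sketched in TypeScript:

```ts
// Map a device DPI value to its screen-density qualifier.
// Bucket boundaries follow Table 2: each range is half-open, (low, high].
function densityQualifier(dpi: number): string {
  if (dpi <= 0 || dpi > 640) {
    throw new RangeError(`dpi outside the documented ranges: ${dpi}`);
  }
  if (dpi <= 120) return 'sdpi';
  if (dpi <= 160) return 'mdpi';
  if (dpi <= 240) return 'ldpi';
  if (dpi <= 320) return 'xldpi';
  if (dpi <= 480) return 'xxldpi';
  return 'xxxldpi';
}

console.log(densityQualifier(160)); // mdpi
console.log(densityQualifier(161)); // ldpi
```

Note that 160 still falls into **mdpi** while 161 already falls into **ldpi**, because each range includes its upper bound.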
**Rules for Matching Qualifiers Sub-directories and Device Resources**
- Qualifiers are matched with the device resources in the following priorities: MCC&MNC > locale (options: language, language_script, language_country/region, and language_script_country/region) > screen orientation > device type > color mode > screen density.
- If the qualifiers sub-directories contain the **MCC, MNC, language, script, screen orientation, device type, and color mode** qualifiers, their values must be consistent with the current device status so that the sub-directories can be used for matching the device resources. For example, the qualifiers sub-directory **zh_CN-car-ldpi** cannot be used for matching the resource files labeled **en_US**.
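To make the ordering and separator rules concrete, the following TypeScript sketch assembles a qualifiers sub-directory name from selected qualifiers. The interface and function are illustrative only; in practice DevEco Studio generates these names for you.

```ts
// Assemble a qualifiers sub-directory name from selected qualifiers.
// Underscores join MCC/MNC and language/script/country-region; hyphens join the groups.
interface Qualifiers {
  mcc?: string; mnc?: string;
  language?: string; script?: string; region?: string;
  orientation?: 'vertical' | 'horizontal';
  device?: 'car' | 'tv' | 'wearable';
  colorMode?: 'dark' | 'light';
  density?: string;
}

function qualifiersDirName(q: Qualifiers): string {
  const groups: string[] = [];
  if (q.mcc) groups.push(q.mnc ? `${q.mcc}_${q.mnc}` : q.mcc);
  const locale = [q.language, q.script, q.region].filter(Boolean).join('_');
  if (locale) groups.push(locale);
  for (const g of [q.orientation, q.device, q.colorMode, q.density]) {
    if (g) groups.push(g); // qualifiers keep the fixed order given above
  }
  return groups.join('-');
}

console.log(qualifiersDirName({ language: 'zh', script: 'Hant', region: 'CN' })); // zh_Hant_CN
console.log(qualifiersDirName({ language: 'zh', region: 'CN', device: 'car', density: 'ldpi' })); // zh_CN-car-ldpi
```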
### Resource Group Sub-directories
You can create resource group sub-directories (including element, media, and rawfile) in the **base** and qualifiers sub-directories to store resource files of specific types.
**Table 3** Resource group sub-directories
| Resource Group Sub-directory | Description | Resource File |
| ------- | ---------------------------------------- | ---------------------------------------- |
| element | Indicates element resources. Each type of data is represented by a JSON file. The options are as follows:<br>- **boolean**: boolean data<br>- **color**: color data<br>- **float**: floating-point data<br>- **intarray**: array of integers<br>- **integer**: integer data<br>- **pattern**: pattern data<br>- **plural**: plural form data<br>- **strarray**: array of strings<br>- **string**: string data| It is recommended that files in the **element** sub-directory be named the same as the following files, each of which can contain only data of the same type:<br>- boolean.json<br>- color.json<br>- float.json<br>- intarray.json<br>- integer.json<br>- pattern.json<br>- plural.json<br>- strarray.json<br>- string.json |
| media | Indicates media resources, including non-text files such as images, audios, and videos. | The file name can be customized, for example, **icon.png**. |
| rawfile | Indicates other types of files, which are stored in their raw formats after the application is built as an HAP file. They will not be integrated into the **resources.index** file.| The file name can be customized. |
**Media Resource Types**
**Table 4** Image resource types
| Format | File Name Extension|
| ---- | ----- |
| JPEG | .jpg |
| PNG | .png |
| GIF | .gif |
| SVG | .svg |
| WEBP | .webp |
| BMP | .bmp |
**Table 5** Audio and video resource types
| Format | File Name Extension |
| ------------------------------------ | --------------- |
| H.263 | .3gp <br>.mp4 |
| H.264 AVC <br> Baseline Profile (BP) | .3gp <br>.mp4 |
| MPEG-4 SP | .3gp |
| VP8 | .webm <br> .mkv |
**Resource File Examples**
The content of the **color.json** file is as follows:
```json
{
"color": [
{
"name": "color_hello",
"value": "#ffff0000"
},
{
"name": "color_world",
"value": "#ff0000ff"
}
]
}
```
The content of the **float.json** file is as follows:
```json
{
"float":[
{
"name":"font_hello",
"value":"28.0fp"
},
{
"name":"font_world",
"value":"20.0fp"
}
]
}
```
The content of the **string.json** file is as follows:
```json
{
"string":[
{
"name":"string_hello",
"value":"Hello"
},
{
"name":"string_world",
"value":"World"
},
{
"name":"message_arrive",
"value":"We will arrive at %s."
}
]
}
```
The content of the **plural.json** file is as follows:
```json
{
"plural":[
{
"name":"eat_apple",
"value":[
{
"quantity":"one",
"value":"%d apple"
},
{
"quantity":"other",
"value":"%d apples"
}
]
}
]
}
```
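Conceptually, a plural resource maps quantity categories to format strings, and the framework picks a category from the number before substituting **%d**. A simplified TypeScript sketch of that lookup (assuming an English-style category rule; real selection is locale-dependent, and this is not the actual resource-manager implementation):

```ts
// Pick the plural entry whose quantity category matches the number,
// falling back to "other", then substitute %d.
interface PluralEntry { quantity: string; value: string; }

function formatPlural(entries: PluralEntry[], n: number): string {
  const category = n === 1 ? 'one' : 'other'; // simplified English rule
  const entry = entries.find(e => e.quantity === category)
    ?? entries.find(e => e.quantity === 'other');
  if (!entry) throw new Error('no matching plural entry');
  return entry.value.replace('%d', String(n));
}

// Entries taken from the eat_apple resource above.
const eatApple: PluralEntry[] = [
  { quantity: 'one', value: '%d apple' },
  { quantity: 'other', value: '%d apples' },
];
console.log(formatPlural(eatApple, 1)); // 1 apple
console.log(formatPlural(eatApple, 5)); // 5 apples
```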
## Resource Access
### Application Resources
**Creating a Resource File**
You can create a sub-directory and its files under the **resources** directory based on the preceding descriptions of the qualifiers sub-directories and resource group sub-directories.
DevEco Studio provides a wizard for you to create resource directories and resource files.
- Creating a Resource Directory and Resource File
Right-click the **resources** directory and choose **New > Resource File**.
If no qualifier is selected, the file is created in a resource type sub-directory under **base**. If one or more qualifiers are selected, the system automatically generates a sub-directory and creates the file in this sub-directory.
The created sub-directory is automatically named in the format of **Qualifiers.Resource type**. For example, if you create a sub-directory by setting **Orientation** to **Vertical** and **Resource type** to **Graphic**, the system automatically generates a sub-directory named **vertical.graphic**.
![create-resource-file-1](figures/create-resource-file-1.png)
- Creating a Resource Directory
Right-click the **resources** directory and choose **New > Resource Directory**. This operation creates a sub-directory only.
Select a resource group type and set qualifiers. Then the system automatically generates the sub-directory name. The sub-directory is automatically named in the format of **Qualifiers.Resource group**. For example, if you create a sub-directory by setting **Orientation** to **Vertical** and **Resource type** to **Graphic**, the system automatically generates a sub-directory named **vertical.graphic**.
![create-resource-file-2](figures/create-resource-file-2.png)
- Creating a Resource File
Right-click a sub-directory under **resources** and choose **New > *XXX* Resource File**. This operation creates a resource file under this sub-directory.
For example, you can create an element resource file in the **element** sub-directory.
![create-resource-file-3](figures/create-resource-file-3.png)
**Accessing Application Resources**
To reference an application resource in a project, use the **"$r('app.type.name')"** format. **app** indicates the resource defined in the **resources** directory of the application. **type** indicates the resource type (or the location where the resource is stored). The value can be **color**, **float**, **string**, **plural**, or **media**. **name** indicates the resource name, which you set when defining the resource.
When referencing resources in the **rawfile** sub-directory, use the **"$rawfile('filename')"** format, where **filename** indicates the relative path of a file in the **rawfile** directory. The path must contain the file name extension and cannot start with a slash (/).
> **NOTE**
>
> Resource descriptors accept only strings, such as **'app.type.name'**, and cannot be combined.
>
> The return value of **$r** is a **Resource** object. You can obtain the corresponding string by using the [getStringValue](../reference/apis/js-apis-resource-manager.md) API.
In the **.ets** file, you can use the resources defined in the **resources** directory.
```ts
Text($r('app.string.string_hello'))
  .fontColor($r('app.color.color_hello'))
  .fontSize($r('app.float.font_hello'))
Text($r('app.string.string_world'))
  .fontColor($r('app.color.color_world'))
  .fontSize($r('app.float.font_world'))
Text($r('app.string.message_arrive', "five of the clock")) // Reference a string resource. The second parameter of $r replaces %s.
  .fontColor($r('app.color.color_hello'))
  .fontSize($r('app.float.font_hello'))
Text($r('app.plural.eat_apple', 5, 5)) // Reference a plural resource. The second parameter selects the plural form; the third replaces %d.
  .fontColor($r('app.color.color_world'))
  .fontSize($r('app.float.font_world'))
Image($r('app.media.my_background_image')) // Reference a media resource.
Image($rawfile('test.png')) // Reference an image in the rawfile directory.
Image($rawfile('newDir/newTest.png')) // Reference an image in a sub-directory of rawfile.
```
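A resource descriptor is a fixed three-part string. The following TypeScript sketch splits and validates one (illustrative only; the actual parsing happens inside the ArkUI framework):

```ts
// Split an 'app.type.name' or 'sys.type.resource_id' descriptor into its parts.
const APP_TYPES = ['color', 'float', 'string', 'plural', 'media'];

function parseDescriptor(desc: string): { scope: string; type: string; name: string } {
  const parts = desc.split('.');
  if (parts.length !== 3) throw new Error(`malformed descriptor: ${desc}`);
  const [scope, type, name] = parts;
  if (scope !== 'app' && scope !== 'sys') throw new Error(`unknown scope: ${scope}`);
  if (scope === 'app' && !APP_TYPES.includes(type)) {
    throw new Error(`unknown app resource type: ${type}`);
  }
  return { scope, type, name };
}

console.log(parseDescriptor('app.string.string_hello'));
// { scope: 'app', type: 'string', name: 'string_hello' }
```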
### System Resources
System resources include colors, rounded corners, fonts, spacing, character strings, and images. By using system resources, you can develop different applications with the same visual style.
To reference a system resource, use the **"$r('sys.type.resource_id')"** format, where **sys** indicates a system resource, **type** indicates the resource type (**color**, **float**, **string**, or **media**), and **resource_id** indicates the resource ID.
```ts
Text('Hello')
.fontColor($r('sys.color.ohos_id_color_emphasize'))
.fontSize($r('sys.float.ohos_id_text_size_headline1'))
.fontFamily($r('sys.string.ohos_id_text_font_family_medium'))
.backgroundColor($r('sys.color.ohos_id_color_palette_aux1'))
Image($r('sys.media.ohos_app_icon'))
.border({color: $r('sys.color.ohos_id_color_palette_aux1'), radius: $r('sys.float.ohos_id_corner_radius_button'), width: 2})
.margin({top: $r('sys.float.ohos_id_elements_margin_horizontal_m'), bottom: $r('sys.float.ohos_id_elements_margin_horizontal_l')})
.height(200)
.width(300)
```
# Before You Start
This document is intended for novices at developing OpenHarmony applications. It will introduce you to the OpenHarmony project directory structure and application development process, by walking you through a stripped-down, real-world example: building two pages and implementing redirection between them. The following figure shows how the pages look on the DevEco Studio Previewer.
Before you begin, there are two basic concepts that will help you better understand OpenHarmony.
OpenHarmony provides a UI development framework, known as ArkUI. ArkUI provides a full range of capabilities you may need for application UI development, ranging from components to layout calculation, animation, UI interaction, and drawing capabilities.
ArkUI comes with two development paradigms: the ArkTS-based declarative development paradigm (declarative development paradigm for short) and the JavaScript-compatible web-like development paradigm (web-like development paradigm for short). You can choose whichever development paradigm aligns with your practice.
| **Development Paradigm**| **Programming Language**| **UI Update Mode**| **Applicable To**| **Intended Audience**|
| -------- | -------- | -------- | -------- | -------- |
| Declarative development paradigm| ArkTS| Data-driven| Applications involving technological sophistication and teamwork| Mobile application and system application developers|
| Web-like development paradigm| JavaScript| Data-driven| Applications and service widgets with simple UIs| Frontend web developers|
For more details, see [UI Development](../ui/arkui-overview.md).
The ability framework model has two forms:
- **Stage model**: introduced since API version 9. For details, see [Stage Model Overview](../ability/stage-brief.md).
The project directory structure of the FA model is different from that of the stage model. The stage model only works with the ArkTS programming language.
For details about the differences between the FA model and stage model, see [Ability Framework Overview](../ability/ability-brief.md).
This document provides an ability with two pages.
## Tool Preparation
1. Download the latest version of [DevEco Studio](https://developer.harmonyos.com/cn/develop/deveco-studio).
2. Install DevEco Studio and configure the development environment. For details, see [Setting Up the Development Environment](https://developer.harmonyos.com/en/docs/documentation/doc-guides/ohos-setting-up-environment-0000001263160443).
When you are done, follow the instructions in [Getting Started with ArkTS in Stage Model](start-with-ets-stage.md), [Getting Started with ArkTS in FA Model](start-with-ets-fa.md), and [Getting Started with JavaScript in FA Model](start-with-js-fa.md).
# Getting Started with ArkTS in FA Model
> **NOTE**
>
> To use ArkTS, your DevEco Studio must be V3.0.0.601 Beta1 or later.
>
> For best possible results, use [DevEco Studio V3.0.0.993](https://developer.harmonyos.com/cn/develop/deveco-studio#download) for your development.
- **src > main > ets > MainAbility > pages**: pages contained in **MainAbility**.
- **src > main > ets > MainAbility > pages > index.ets**: the first page in the **pages** list, also referred to as the entry to the application.
- **src > main > ets > MainAbility > app.ets**: ability lifecycle file.
- **src > main > resources**: a collection of resource files used by your application/service, such as graphics, multimedia, character strings, and layout files. For details about resource files, see [Resource Categories and Access](resource-categories-and-access.md#resource-categories).
- **src > main > config.json**: module configuration file. This file describes the global configuration information of the application/service, the device-specific configuration information, and the configuration information of the HAP file. For details about the configuration file, see [Application Package Structure Configuration File (FA Model)](package-structure.md).
- **build-profile.json5**: current module information and build configuration options, including **buildOption** and **targets**.
- **hvigorfile.js**: module-level compilation and build task script. You can customize related tasks and code implementation.
After the project synchronization is complete, choose **entry** > **src** > **main** > **ets** > **MainAbility** > **pages** in the **Project** window and open the **index.ets** file. You can see that the file contains a **\<Text>** component. The sample code in the **index.ets** file is shown below:
```ts
// index.ets
@Entry
// ...
```
On the default page, add a **\<Button>** component to respond to user clicks and implement redirection to another page. The sample code in the **index.ets** file is shown below:
```ts
// index.ets
@Entry
// ...
```
Add **\<Text>** and **\<Button>** components and set their styles, as you do for the first page. The sample code in the **second.ets** file is shown below:
```ts
// second.ets
@Entry
// ...
```
You can implement page redirection through the [page router](../reference/apis/js-apis-router.md).
In the **index.ets** file of the first page, bind the **onClick** event to the **Next** button so that clicking the button redirects the user to the second page. The sample code in the **index.ets** file is shown below:
```ts
// index.ets
import router from '@ohos.router';
// ...
```
In the **second.ets** file of the second page, bind the **onClick** event to the **Back** button so that clicking the button redirects the user back to the first page. The sample code in the **second.ets** file is shown below:
```ts
// second.ets
import router from '@ohos.router';
// ...
```
![en-us_image_0000001363934577](figures/en-us_image_0000001363934577.png)
Congratulations! You have finished developing your OpenHarmony application in ArkTS in the FA model. To learn more about OpenHarmony application development, see [Application Development Overview](../application-dev-guide.md).
# Getting Started with ArkTS in Stage Model
> **NOTE**
>
> To use ArkTS, your DevEco Studio must be V3.0.0.900 Beta3 or later.
>
> For best possible results, use [DevEco Studio V3.0.0.993](https://developer.harmonyos.com/cn/develop/deveco-studio#download) for your development.
- **src > main > ets > MainAbility**: entry to your application/service.
- **src > main > ets > MainAbility > MainAbility.ets**: ability lifecycle file.
- **src > main > ets > pages**: pages contained in **MainAbility**.
- **src > main > resources**: a collection of resource files used by your application/service, such as graphics, multimedia, character strings, and layout files. For details about resource files, see [Resource Categories and Access](resource-categories-and-access.md#resource-categories).
- **src > main > module.json5**: module configuration file. This file describes the global configuration information of the application/service, the device-specific configuration information, and the configuration information of the HAP file. For details about the configuration file, see [Application Package Structure Configuration File (Stage Model)](stage-structure.md).
- **build-profile.json5**: current module information and build configuration options, including **buildOption** and **targets**.
- **hvigorfile.js**: module-level compilation and build task script. You can customize related tasks and code implementation.
1. Use the **\<Text>** component.
After the project synchronization is complete, choose **entry** > **src** > **main** > **ets** > **pages** in the **Project** window and open the **index.ets** file. You can see that the file contains a **\<Text>** component. The sample code in the **index.ets** file is shown below:
```ts
// index.ets
// ...
```
On the default page, add a **\<Button>** component to respond to user clicks and implement redirection to another page. The sample code in the **index.ets** file is shown below:
```ts
// index.ets
@Entry
// ...
```
Add **\<Text>** and **\<Button>** components and set their styles, as you do for the first page. The sample code in the **second.ets** file is shown below:
```ts
// second.ets
@Entry
// ...
```
You can implement page redirection through the [page router](../reference/apis/js-apis-router.md).
In the **index.ets** file of the first page, bind the **onClick** event to the **Next** button so that clicking the button redirects the user to the second page. The sample code in the **index.ets** file is shown below:
```ts
// index.ets
import router from '@ohos.router';
// ...
```
In the **second.ets** file of the second page, bind the **onClick** event to the **Back** button so that clicking the button redirects the user back to the first page. The sample code in the **second.ets** file is shown below:
```ts
// second.ets
import router from '@ohos.router';
// ...
```
![en-us_image_0000001311334972](figures/en-us_image_0000001311334972.png)
Congratulations! You have finished developing your OpenHarmony application in ArkTS in the stage model. To learn more about OpenHarmony application development, see [Application Development Overview](../application-dev-guide.md).
After the project synchronization is complete, choose **entry** > **src** > **main** > **js** > **MainAbility** > **pages** > **index** in the **Project** window and open the **index.hml** file. You can see that the file contains a **\<Text>** component. The sample code in the **index.hml** file is shown below:
```html
<!-- index.hml -->
<div class="container">
  <!-- ... -->
</div>
```
On the default page, add a **\<Button>** component to respond to user clicks and implement redirection to another page. The sample code in the **index.hml** file is shown below:
```html
<!-- index.hml -->
<div class="container">
  <!-- ... -->
</div>
```
From the **Project** window, choose **entry** > **src** > **main** > **js** > **MainAbility** > **pages** > **index**, open the **index.css** file, and set the page styles, such as the width, height, font size, and spacing. The sample code in the **index.css** file is shown below: From the **Project** window, choose **entry** > **src** > **main** > **js** > **MainAbility** > **pages** > **index**, open the **index.css** file, and set the page styles, such as the width, height, font size, and spacing. The sample code in the **index.css** file is shown below:
```css
/* index.css */
.container {
    display: flex;
    flex-direction: column;
    justify-content: center;
    align-items: center;
}
```
Add **\<Text>** and **\<Button>** components and set their styles, as you do for the first page. The sample code in the **second.hml** file is shown below:
```html
<!-- second.hml -->
<div class="container">
    <text class="title">
        Hi there
    </text>
    <!-- Button that returns to the first page; the back method is bound in second.js. -->
    <input class="btn" type="button" value="Back" onclick="back"></input>
</div>
```
You can implement page redirection through the page router, which finds the target page based on the page URI.
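Conceptually, the page router maintains a stack of page URIs: **push** places the target page on top, and **back** pops it to return to the previous page. The stand-alone sketch below models only this stack behavior in plain JavaScript; it is an illustration, not the real **@ohos.router** implementation, and the class and URI names are ours.

```javascript
// Minimal model of the page router's stack behavior (illustrative only,
// not the actual @ohos.router implementation).
class PageRouter {
  constructor(entryUrl) {
    this.stack = [entryUrl]; // the entry page sits at the bottom of the stack
  }
  push({ url }) {
    this.stack.push(url); // navigate forward: the new page goes on top
  }
  back() {
    if (this.stack.length > 1) this.stack.pop(); // return to the previous page
  }
  current() {
    return this.stack[this.stack.length - 1]; // page currently displayed
  }
}

const router = new PageRouter('pages/index/index');
router.push({ url: 'pages/second/second' });
console.log(router.current()); // 'pages/second/second'
router.back();
console.log(router.current()); // 'pages/index/index'
```

In the real API, the URI passed to **push** must match a page registered in the project's page list, which is how the router "finds the target page."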
In the **index.js** file of the first page, bind the **onclick** method to the button so that clicking the button redirects the user to the second page. The sample code in the **index.js** file is shown below:
```js
// index.js
import router from '@ohos.router';

export default {
    data: {
        title: 'World'
    },
    onclick() {
        // Redirect to the second page by its URI.
        router.push({
            url: 'pages/second/second'
        });
    }
}
```
In the **second.js** file of the second page, bind the **back** method to the **Back** button so that clicking the button redirects the user back to the first page. The sample code in the **second.js** file is shown below:
```js
// second.js
import router from '@ohos.router';

export default {
    back() {
        // Return to the previous page in the router stack.
        router.back();
    }
}
```