# Audio Playback Development

You can use audio playback APIs to convert audio data into audible analog signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, and stop playback, release resources, set the volume, seek to a playback position, and obtain track information.
## Working Principles
The following figures show the audio playback status changes and the interaction with external modules for audio playback.
**Figure 1** Playback status changes
**NOTE**: If the status is **Idle**, setting the **src** attribute does not change the status. In addition, after the **src** attribute is set successfully, you must call **reset()** before setting it to another value.
**Figure 2** Interaction with external modules for audio playback
**NOTE**: When a third-party application calls a JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework and outputs the software-decoded audio data to the audio HDI at the hardware interface layer to implement audio playback.
## How to Develop
For details about the APIs, see [AudioPlayer in the Media API](../reference/apis/js-apis-media.md#audioplayer).
> **NOTE**
>
> The method for obtaining the path in the FA model is different from that in the stage model. **pathDir** used in the sample code below is an example. You need to obtain the path based on project requirements. For details about how to obtain the path, see [Application Sandbox Path Guidelines](../reference/apis/js-apis-fileio.md#guidelines).
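For illustration only, the sketch below shows one common way to obtain the sandbox path in each model; the exact context APIs depend on your API version and ability type, so verify them against the guideline linked above.

```js
// FA model: obtain the files directory from the ability context.
import featureAbility from '@ohos.ability.featureAbility'

async function getPathDirFA() {
  let context = featureAbility.getContext();
  let pathDir = await context.getFilesDir(); // Resolves to the app sandbox files path.
  return pathDir;
}

// Stage model (sketch): the ability context exposes the directory directly,
// e.g. let pathDir = this.context.filesDir; inside a UIAbility.
```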
### Full-Process Scenario
The full audio playback process includes creating an instance, setting the URI, playing audio, seeking to the playback position, setting the volume, pausing playback, obtaining track information, stopping playback, resetting the player, and releasing resources.
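Before walking through the steps, here is a minimal end-to-end sketch of that flow, condensed from the fragments in this section. The file name **01.mp3** and the sandbox path are placeholders, and the callback chain is one possible ordering rather than the required one.

```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'

async function fullProcessDemo() {
  // 1. Create an AudioPlayer instance.
  let audioPlayer = media.createAudioPlayer();
  // 2. Wire the callbacks: each completed operation triggers the next step.
  audioPlayer.on('dataLoad', () => {
    audioPlayer.getTrackDescription((error, arrList) => { // Obtain track information.
      if (arrList != null) { console.info('track count: ' + arrList.length); }
    });
    audioPlayer.play();                                   // Source ready: start playback.
  });
  audioPlayer.on('play', () => { audioPlayer.seek(5000); });                       // Seek to the 5 s position.
  audioPlayer.on('timeUpdate', (seekDoneTime) => { audioPlayer.setVolume(0.5); }); // Seek done: set the volume.
  audioPlayer.on('volumeChange', () => { audioPlayer.pause(); });                  // Volume set: pause.
  audioPlayer.on('pause', () => { audioPlayer.stop(); });                          // Paused: stop.
  audioPlayer.on('stop', () => { audioPlayer.reset(); });                          // Stopped: reset the player.
  audioPlayer.on('reset', () => { audioPlayer.release(); });                       // Reset done: release resources.
  audioPlayer.on('error', (error) => { console.error(`audio error: ${error}`); });
  // 3. Set the URI of the audio file (fd:// form, as in the fragments below).
  let fdPath = 'fd://';
  let pathDir = '/data/storage/el2/base/haps/entry/files'; // Placeholder; obtain per your project.
  await fileIO.open(pathDir + '/01.mp3').then((fdNumber) => {
    fdPath = fdPath + '' + fdNumber;
  });
  audioPlayer.src = fdPath; // Triggers 'dataLoad'.
}
```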
...
...
  setCallBack(audioPlayer); // Set the event callbacks.
  // 2. Set the URI of the audio file.
  let fdPath = 'fd://'
  let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
  // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
  let path = pathDir + '/01.mp3'
  await fileIO.open(path).then((fdNumber) => {
    fdPath = fdPath + '' + fdNumber;
    console.info('open fd success fd is' + fdPath);
...
...
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'

export class AudioDemo {
  // Set the player callbacks.
  setCallBack(audioPlayer) {
...
...
    let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
    this.setCallBack(audioPlayer); // Set the event callbacks.
    let fdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let path = pathDir + '/01.mp3'
    await fileIO.open(path).then((fdNumber) => {
      fdPath = fdPath + '' + fdNumber;
      console.info('open fd success fd is' + fdPath);
...
...
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'

export class AudioDemo {
  // Set the player callbacks.
  private isNextMusic = false;
...
...
  async nextMusic(audioPlayer) {
    this.isNextMusic = true;
    let nextFdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let nextpath = pathDir + '/02.mp3'
    await fileIO.open(nextpath).then((fdNumber) => {
      nextFdPath = nextFdPath + '' + fdNumber;
      console.info('open fd success fd is' + nextFdPath);
...
...
    let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
    this.setCallBack(audioPlayer); // Set the event callbacks.
    let fdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
    let path = pathDir + '/01.mp3'
    await fileIO.open(path).then((fdNumber) => {
      fdPath = fdPath + '' + fdNumber;
      console.info('open fd success fd is' + fdPath);
...
...
```js
import media from '@ohos.multimedia.media'
import fileIO from '@ohos.fileio'

export class AudioDemo {
  // Set the player callbacks.
  setCallBack(audioPlayer) {
...
...
    let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
    this.setCallBack(audioPlayer); // Set the event callbacks.
    let fdPath = 'fd://'
    let pathDir = "/data/storage/el2/base/haps/entry/files" // The method for obtaining pathDir in the FA model is different from that in the stage model. For details, see NOTE just below How to Develop. You need to obtain pathDir based on project requirements.
    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
# Audio Recording Development

During audio recording, audio signals are captured, encoded, and saved to files. You can specify parameters such as the sampling rate, number of audio channels, encoding format, encapsulation format, and output file path for audio recording.
## Working Principles
The following figures show the audio recording state transition and the interaction with external modules for audio recording.
**Figure 1** Audio recording state transition
...
...
**Figure 2** Interaction with external modules for audio recording
**NOTE**: When a third-party recording application or recorder calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework to obtain the audio data captured through the audio HDI. The framework layer then encodes the audio data through software and saves the encoded and encapsulated audio data to a file to implement audio recording.
## Constraints
Before developing audio recording, configure the **ohos.permission.MICROPHONE** permission for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
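As a hedged illustration, a stage-model **module.json5** might declare the permission as follows; the surrounding fields are placeholders, so follow the linked guide for the authoritative layout.

```json5
{
  "module": {
    // Placeholder structure; see the permission guide for the full file.
    "requestPermissions": [
      {
        "name": "ohos.permission.MICROPHONE"
      }
    ]
  }
}
```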
## How to Develop
For details about the APIs, see [AudioRecorder in the Media API](../reference/apis/js-apis-media.md#audiorecorder).
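As a quick orientation before the step-by-step instructions, the sketch below strings the main **AudioRecorder** calls together; the configuration values are illustrative placeholders, not recommended settings.

```js
import media from '@ohos.multimedia.media'

let audioRecorder = media.createAudioRecorder(); // Create an AudioRecorder instance.
// The recorder is event driven: each operation completes through a callback.
audioRecorder.on('prepare', () => { audioRecorder.start(); });   // Prepared: start capturing.
audioRecorder.on('start', () => { console.info('recording'); });
audioRecorder.on('stop', () => { audioRecorder.release(); });    // Stopped: release resources.
audioRecorder.on('error', (error) => { console.error(`recorder error: ${error}`); });

let audioRecorderConfig = {
  audioEncodeBitRate: 22050,                  // Placeholder values; tune per use case.
  audioSampleRate: 22050,
  numberOfChannels: 2,
  format: media.AudioOutputFormat.AAC_ADTS,   // Encapsulation format.
  uri: 'fd://xx',                             // fd of the output file; placeholder.
};
audioRecorder.prepare(audioRecorderConfig);   // Triggers the 'prepare' callback.
// Later, audioRecorder.stop() triggers the 'stop' callback above.
```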
# Video Playback Development

You can use video playback APIs to convert video data into visible signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, and stop playback, release resources, set the volume, seek to a playback position, set the playback speed, and obtain track information. This document describes development for the following video playback scenarios: full-process, normal playback, video switching, and loop playback.
## Working Principles
The following figures show the video playback state transition and the interaction with external modules for video playback.
**NOTE**: When a third-party application calls a JS interface provided by the JS interface layer, the framework layer invokes the audio component through the media service of the native framework to output the audio data decoded by the software to the audio HDI. The graphics subsystem outputs the image data decoded by the codec HDI at the hardware interface layer to the display HDI. In this way, video playback is implemented.
*Note: Video playback requires hardware capabilities such as display, audio, and codec.*
1. A third-party application obtains a surface ID from the XComponent.
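The sketch below illustrates how that surface ID is then handed to the player. The way **surfaceID** is produced by the XComponent is assumed here, while the **VideoPlayer** calls follow the APIs referenced in this document.

```js
import media from '@ohos.multimedia.media'

// surfaceID is assumed to come from the XComponent (for example, from its
// onLoad callback in the page code); fdPath is an 'fd://' URI as in the
// audio examples above.
async function playVideo(surfaceID, fdPath) {
  let videoPlayer = await media.createVideoPlayer(); // Create a VideoPlayer instance.
  videoPlayer.url = fdPath;                          // Set the media source.
  await videoPlayer.setDisplaySurface(surfaceID);    // Bind the surface obtained in step 1.
  await videoPlayer.prepare();                       // Trigger decoding preparation.
  await videoPlayer.play();                          // Start playback.
}
```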
# Video Recording Development

You can use video recording APIs to capture audio and video signals, encode them, and save them to files. You can start, suspend, resume, and stop recording, and release resources. You can also specify parameters such as the encoding format, encapsulation format, and file path for video recording.
## Working Principles
The following figures show the video recording state transition and the interaction with external modules for video recording.
**NOTE**: When a third-party camera application or system camera calls a JS interface provided by the JS interface layer, the framework layer uses the media service of the native framework to invoke the audio component. Through the audio HDI, the audio component captures audio data, encodes the audio data through software, and saves the encoded audio data to a file. The graphics subsystem captures image data through the video HDI, encodes the image data through the video codec HDI, and saves the encoded image data to a file. In this way, video recording is implemented.
## Constraints
Before developing video recording, configure the permissions **ohos.permission.MICROPHONE** and **ohos.permission.CAMERA** for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop
For details about the APIs, see [VideoRecorder in the Media API](../reference/apis/js-apis-media.md#videorecorder9).
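As a condensed sketch of the flow described in this section, the example below chains the main **VideoRecorder** calls; the profile values are illustrative placeholders, and the camera that feeds the input surface is assumed to be set up separately.

```js
import media from '@ohos.multimedia.media'

async function videoRecordSketch(fdUrl) {
  let videoProfile = {
    audioBitrate: 48000,        // Placeholder values throughout the profile.
    audioChannels: 2,
    audioCodec: 'audio/mp4a-latm',
    audioSampleRate: 48000,
    fileFormat: 'mp4',
    videoBitrate: 2000000,
    videoCodec: 'video/mp4v-es',
    videoFrameWidth: 640,
    videoFrameHeight: 480,
    videoFrameRate: 30
  };
  let videoConfig = {
    audioSourceType: 1,         // Microphone.
    videoSourceType: 0,         // Surface-based video input.
    profile: videoProfile,
    url: fdUrl                  // e.g. an 'fd://' URI.
  };
  let videoRecorder = await media.createVideoRecorder(); // Create a VideoRecorder instance.
  await videoRecorder.prepare(videoConfig);
  let surfaceID = await videoRecorder.getInputSurface(); // Hand this surface to the camera.
  await videoRecorder.start();
  // ... record for some time ...
  await videoRecorder.stop();
  await videoRecorder.release();
}
```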
...
...
# Media

> **NOTE**
>
> The initial APIs of this module are supported since API version 6. Newly added APIs will be marked with a superscript to indicate their earliest API version.

The multimedia subsystem provides a set of simple and easy-to-use APIs for you to access the system and use media resources.
...
...
Creates a **VideoPlayer** instance. This API uses an asynchronous callback to return the result.
| callback | AsyncCallback<[VideoPlayer](#videoplayer8)> | Yes | Callback used to return the result. If the operation is successful, the **VideoPlayer** instance is returned; otherwise, **null** is returned. The instance can be used to manage and play video media.|
**Example**
...
...
Creates a **VideoPlayer** instance. This API uses a promise to return the result.
| Promise<[VideoPlayer](#videoplayer8)> | Promise used to return the result. If the operation is successful, the **VideoPlayer** instance is returned; otherwise, **null** is returned. The instance can be used to manage and play video media.|
**Example**
...
...
Only one **AudioRecorder** instance can be created per device.
| [AudioRecorder](#audiorecorder) | Returns the **AudioRecorder** instance if the operation is successful; returns **null** otherwise. The instance can be used to record audio media.|
**Example**
...
...
| callback | AsyncCallback<[VideoRecorder](#videorecorder9)> | Yes | Callback used to return the result. If the operation is successful, the **VideoRecorder** instance is returned; otherwise, **null** is returned. The instance can be used to record video media.|
**Example**
...
...
| Promise<[VideoRecorder](#videorecorder9)> | Promise used to return the result. If the operation is successful, the **VideoRecorder** instance is returned; otherwise, **null** is returned. The instance can be used to record video media.|
**Example**
...
...
Enumerates the buffering event types.
| BUFFERING_START | 1 | Buffering starts. |
| BUFFERING_END | 2 | Buffering ends. |
| BUFFERING_PERCENT | 3 | Buffering progress, in percent. |
| CACHED_DURATION | 4 | Cache duration, in ms.|
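For context, these values are delivered through the player's **bufferingUpdate** event; a minimal sketch, assuming a **videoPlayer** instance already exists:

```js
videoPlayer.on('bufferingUpdate', (infoType, value) => {
  // infoType is one of the BufferingInfoType values above; value carries the
  // percentage or cached duration when the type calls for one.
  console.info('bufferingUpdate: type ' + infoType + ', value ' + value);
});
```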
## AudioPlayer
...
...
| callback | AsyncCallback<Array<[MediaDescription](#mediadescription8)>> | Yes | Callback used to return a **MediaDescription** array, which records the audio track information.|
**Example**
...
...
Obtains the audio track information. This API uses a promise to return the result.
| Promise<Array<[MediaDescription](#mediadescription8)>> | Promise used to return a **MediaDescription** array, which records the audio track information.|
**Example**
...
...
| callback | AsyncCallback<Array<[MediaDescription](#mediadescription8)>> | Yes | Callback used to return a **MediaDescription** array, which records the video track information.|
**Example**
...
...
Obtains the video track information. This API uses a promise to return the result.
| Promise<Array<[MediaDescription](#mediadescription8)>> | Promise used to return a **MediaDescription** array, which records the video track information.|
**Example**
...
...
Sets the video playback speed. This API uses a promise to return the result.
**Return value**
| Type | Description |
| ---------------- | ------------------------- |
| Promise\<number> | Promise used to return the result.|
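A minimal usage sketch, assuming a prepared **videoPlayer** instance:

```js
import media from '@ohos.multimedia.media'

videoPlayer.setSpeed(media.PlaybackSpeed.SPEED_FORWARD_2_00_X).then(() => {
  console.info('setSpeed success');
}).catch((error) => {
  console.error(`setSpeed failed, error: ${error}`);
});
```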