@@ -14,7 +14,7 @@ The audio interruption policy determines the operations (for example, pause, res
Two audio interruption modes, specified by [InterruptMode](../reference/apis/js-apis-audio.md#interruptmode9), are preset in the audio interruption policy:
- **SHARE_MODE**: Multiple audio streams created by an application share one audio focus. The concurrency rules between these audio streams are determined by the application, without the use of the audio interruption policy. However, if another application needs to play audio while one of these audio streams is being played, the audio interruption policy is triggered.
- **INDEPENDENT_MODE**: Each audio stream created by an application has an independent audio focus. When multiple audio streams are played concurrently, the audio interruption policy is triggered.
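An application can set the mode per stream through **setInterruptMode()**. The following is a minimal sketch, assuming an **AudioRenderer** instance has already been created (the `renderer` parameter name is an assumption for illustration):

```ts
import audio from '@ohos.multimedia.audio';

// Sketch only: 'renderer' is assumed to be an AudioRenderer created earlier
// with audio.createAudioRenderer(). In INDEPENDENT_MODE, this stream holds
// its own audio focus, so concurrent playback triggers the interruption policy.
async function applyIndependentMode(renderer: audio.AudioRenderer): Promise<void> {
  await renderer.setInterruptMode(audio.InterruptMode.INDEPENDENT_MODE);
}
```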
@@ -8,7 +8,7 @@ OpenHarmony provides multiple classes for you to develop audio playback applicat
- [AudioRenderer](using-audiorenderer-for-playback.md): provides ArkTS and JS APIs to implement audio output. It supports only the PCM format and requires applications to continuously write audio data. Applications can preprocess the data before writing it, for example, by setting the sampling rate and bit width of the audio files. This class can be used to develop more professional and diverse playback applications. To use this class, you must have basic audio processing knowledge. (A minimal creation sketch follows this list.)
- [OpenSL ES](using-opensl-es-for-playback.md): provides a set of standard, cross-platform, yet unique native audio APIs. It supports audio output in PCM format and is applicable to playback applications that are ported from other embedded platforms or that implement audio output at the native layer.
- [TonePlayer](using-toneplayer-for-playback.md): provides ArkTS and JS APIs to implement the playback of dialing tones and ringback tones. It can be used to play content selected from a fixed set of tone types, without requiring media assets or audio data as input. This class is applicable to specific scenarios where dialing tones and ringback tones are played. It is available only to system applications.
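For orientation, the sketch below shows one way to create an **AudioRenderer** for PCM output; the stream and renderer parameters are illustrative, not prescriptive:

```ts
import audio from '@ohos.multimedia.audio';

// Illustrative parameters; match them to your actual PCM data.
let audioRendererOptions: audio.AudioRendererOptions = {
  streamInfo: {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
    channels: audio.AudioChannel.CHANNEL_2,
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
  },
  rendererInfo: {
    content: audio.ContentType.CONTENT_TYPE_MUSIC,
    usage: audio.StreamUsage.STREAM_USAGE_MEDIA,
    rendererFlags: 0
  }
};

audio.createAudioRenderer(audioRendererOptions, (err, renderer) => {
  if (err) {
    console.error(`createAudioRenderer failed, code: ${err.code}`);
    return;
  }
  // The renderer starts in STATE_PREPARED; call renderer.start() and then
  // renderer.write() repeatedly to output PCM data.
});
```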
@@ -8,7 +8,7 @@ OpenHarmony provides multiple classes for you to develop audio recording applica
- [AudioCapturer](using-audiocapturer-for-recording.md): provides ArkTS and JS APIs to implement audio input. It supports only the PCM format and requires applications to continuously read audio data. Applications can process the data after the audio is captured. This class can be used to develop more professional and diverse recording applications. To use this class, you must have basic audio processing knowledge. (A minimal creation sketch follows this list.)
- [OpenSL ES](using-opensl-es-for-recording.md): provides a set of standard, cross-platform, yet unique native audio APIs. It supports audio input in PCM format and is applicable to recording applications that are ported from other embedded platforms or that implement audio input at the native layer.
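As a counterpart to the playback sketch above, the following shows one way to create an **AudioCapturer** for PCM input; the parameters are illustrative:

```ts
import audio from '@ohos.multimedia.audio';

// Illustrative parameters; align them with the data you intend to capture.
// Note: recording requires the ohos.permission.MICROPHONE permission.
let audioCapturerOptions: audio.AudioCapturerOptions = {
  streamInfo: {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
    channels: audio.AudioChannel.CHANNEL_2,
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
  },
  capturerInfo: {
    source: audio.SourceType.SOURCE_TYPE_MIC,
    capturerFlags: 0
  }
};

audio.createAudioCapturer(audioCapturerOptions, (err, capturer) => {
  if (err) {
    console.error(`createAudioCapturer failed, code: ${err.code}`);
    return;
  }
  // Call capturer.start() and then capturer.read() in a loop to pull PCM data.
});
```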
## Precautions for Developing Audio Recording Applications
The multimedia subsystem provides the capability of processing users' visual and auditory information. For example, it can be used to collect, compress, store, decompress, and play audio and video information. Based on the type of media information to process, the multimedia subsystem is usually divided into four modules: audio, media, camera, and image.
As shown in the figure below, the multimedia subsystem provides APIs for developing audio/video, camera, and gallery applications, and provides adaptation and acceleration for different hardware chips. At the middle layer, it provides core media functionalities and management mechanisms in the form of services.
this.renderModel.on('stateChange', (state) => { // Set the events to listen for. A callback is invoked when the AudioRenderer is switched to the specified state.
  // 'audio' refers to the '@ohos.multimedia.audio' module imported elsewhere in this sample.
  if (state === audio.AudioState.STATE_PREPARED) {
    console.info('audio renderer state is: STATE_PREPARED');
  }
  if (state === audio.AudioState.STATE_RUNNING) {
    console.info('audio renderer state is: STATE_RUNNING');
  }
});
An audio and video application needs to access the AVSession service as a provider in order to display media information in the controller (for example, Media Controller) and respond to playback control commands delivered by the controller.
## Basic Concepts
...
...
@@ -14,22 +14,22 @@ The table below lists the key APIs used by the provider. The APIs use either a c
For details, see [AVSession Management](../reference/apis/js-apis-avsession.md).
| API| Description|
| -------- | -------- |
| createAVSession(context: Context, tag: string, type: AVSessionType, callback: AsyncCallback<AVSession>): void | Creates an AVSession.<br>Only one AVSession can be created for a UIAbility.|
| setLaunchAbility(ability: WantAgent, callback: AsyncCallback<void>): void | Starts a UIAbility.|
| getController(callback: AsyncCallback<AVSessionController>): void | Obtains the controller of the AVSession.|
| activate(callback: AsyncCallback<void>): void | Activates the AVSession.|
| destroy(callback: AsyncCallback<void>): void | Destroys the AVSession.|
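As a sketch of the creation call in the table above (assuming the code runs inside a UIAbility so that `this.context` is available; the session tag is a placeholder):

```ts
import avSession from '@ohos.multimedia.avsession';

// 'SESSION_TAG' is a placeholder; only one AVSession can be created for a UIAbility.
avSession.createAVSession(this.context, 'SESSION_TAG', 'audio', (err, session) => {
  if (err) {
    console.error(`createAVSession failed, code: ${err.code}`);
    return;
  }
  // Keep the session reference for later calls such as setLaunchAbility(),
  // activate(), and destroy().
});
```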
// The player logic that triggers changes in the session metadata and playback state is omitted here.
// Set necessary session metadata.
...
...
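The elided updates typically reduce to **setAVMetadata()** and **setAVPlaybackState()**. A minimal sketch, assuming an existing `session` object and illustrative field values:

```ts
// Sketch only: 'session' is the AVSession created earlier; values are illustrative.
// (Run inside an async function.)
let metadata: avSession.AVMetadata = {
  assetId: '0',   // Unique ID of the current media asset.
  title: 'TITLE',
  artist: 'ARTIST'
};
await session.setAVMetadata(metadata);

// Report the playback state so the controller UI stays in sync with the player.
await session.setAVPlaybackState({
  state: avSession.PlaybackState.PLAYBACK_STATE_PLAY,
  position: { elapsedTime: 0, updateTime: new Date().getTime() }
});
```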
@@ -80,13 +80,13 @@ To enable an audio and video application to access the AVSession service as a pr
3. Set the UIAbility to be started by the controller. The UIAbility configured here is started when a user operates the UI of the controller, for example, clicking a widget in Media Controller.
The UIAbility is set through the **WantAgent** API. For details, see [WantAgent](../reference/apis/js-apis-app-ability-wantAgent.md).
```ts
import WantAgent from "@ohos.app.ability.wantAgent";
```
```ts
// It is assumed that an AVSession object has been created. For details about how to create an AVSession object, see the code snippet in step 1.
@@ -104,14 +104,14 @@ To enable an audio and video application to access the AVSession service as a pr
})
```
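Putting this step together, a sketch of building the **WantAgent** and attaching it might look as follows (the bundle and ability names are placeholders, and `session` is assumed to be the AVSession from step 1):

```ts
// Placeholder bundle/ability names; replace them with your application's own.
let wantAgentInfo = {
  wants: [
    {
      bundleName: 'com.example.musicdemo',
      abilityName: 'MainAbility'
    }
  ],
  operationType: WantAgent.OperationType.START_ABILITY,
  requestCode: 0,
  wantAgentFlags: [WantAgent.WantAgentFlags.UPDATE_PRESENT_FLAG]
};

WantAgent.getWantAgent(wantAgentInfo, (err, agent) => {
  if (err) {
    console.error(`getWantAgent failed, code: ${err.code}`);
    return;
  }
  // Assumption: 'session' is the AVSession created in step 1.
  session.setLaunchAbility(agent, (err) => {
    if (err) {
      console.error(`setLaunchAbility failed, code: ${err.code}`);
    }
  });
});
```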
4. Listen for playback control commands delivered by the controller, for example, Media Controller.
> **NOTE**
>
> After the provider registers a listener for playback control commands, the commands will be reflected in **getValidCommands()** of the controller. In other words, the controller determines that the command is valid and triggers the corresponding event as required. To ensure that the playback control commands delivered by the controller can be executed normally, the provider should not use a null implementation for listening.
```ts
async setListenerForMesFromController() {
  // It is assumed that an AVSession object has been created and stored in this.session. For details, see the code snippet in step 1.
  // Register a non-empty listener for every playback control command the application supports.
  // 'play' is shown as an example; 'pause', 'stop', and the other commands are registered the same way.
  this.session.on('play', () => {
    // Start playback here and report the new state via setAVPlaybackState().
  });
}
```