# Media
- [Media Application Overview](media-application-overview.md)
- Audio and Video
- [Audio Overview](audio-overview.md)
- [Audio Rendering Development](audio-renderer.md)
- [Audio Stream Management Development](audio-stream-manager.md)
- [Audio Capture Development](audio-capturer.md)
- [OpenSL ES Audio Playback Development](opensles-playback.md)
- [OpenSL ES Audio Recording Development](opensles-capture.md)
- [Audio Interruption Mode Development](audio-interruptmode.md)
- [Volume Management Development](audio-volume-manager.md)
- [Audio Routing and Device Management Development](audio-routing-manager.md)
- [AVPlayer Development (Recommended)](avplayer-playback.md)
- [AVRecorder Development (Recommended)](avrecorder.md)
- [Audio Playback Development (To Be Deprecated Soon)](audio-playback.md)
- [Audio Recording Development (To Be Deprecated Soon)](audio-recorder.md)
- [Video Playback Development (To Be Deprecated Soon)](video-playback.md)
- [Video Recording Development (To Be Deprecated Soon)](video-recorder.md)
- AVSession
- [Audio and Video Overview](av-overview.md)
- [AVPlayer and AVRecorder](avplayer-avrecorder-overview.md)
- Audio Playback
- [Audio Playback Overview](audio-playback-overview.md)
- [Using AVPlayer for Audio Playback](using-avplayer-for-playback.md)
- [Using AudioRenderer for Audio Playback](using-audiorenderer-for-playback.md)
- [Using OpenSL ES for Audio Playback](using-opensl-es-for-playback.md)
- [Using TonePlayer for Audio Playback (for System Applications Only)](using-toneplayer-for-playback.md)
- [Audio Playback Concurrency Policy](audio-playback-concurrency.md)
- [Volume Management](volume-management.md)
- [Audio Playback Stream Management](audio-playback-stream-management.md)
- [Audio Output Device Management](audio-output-device-management.md)
- [Distributed Audio Playback (for System Applications Only)](distributed-audio-playback.md)
- Audio Recording
- [Audio Recording Overview](audio-recording-overview.md)
- [Using AVRecorder for Audio Recording](using-avrecorder-for-recording.md)
- [Using AudioCapturer for Audio Recording](using-audiocapturer-for-recording.md)
- [Using OpenSL ES for Audio Recording](using-opensl-es-for-recording.md)
- [Microphone Management](mic-management.md)
- [Audio Recording Stream Management](audio-recording-stream-management.md)
- [Audio Input Device Management](audio-input-device-management.md)
- Audio Call
- [Audio Call Overview](audio-call-overview.md)
- [Developing Audio Call](audio-call-development.md)
- [Video Playback](video-playback.md)
- [Video Recording](video-recording.md)
- AVSession (for System Applications Only)
- [AVSession Overview](avsession-overview.md)
- [AVSession Development](avsession-guidelines.md)
- Local AVSession
- [Local AVSession Overview](local-avsession-overview.md)
- [AVSession Provider](using-avsession-developer.md)
- [AVSession Controller](using-avsession-controller.md)
- Distributed AVSession
- [Distributed AVSession Overview](distributed-avsession-overview.md)
- [Using Distributed AVSession](using-distributed-avsession.md)
- Camera (for System Applications Only)
- [Camera Overview](camera-overview.md)
- Camera Development
- [Camera Development Preparations](camera-preparation.md)
- [Device Input Management](camera-device-input.md)
- [Session Management](camera-session-management.md)
- [Camera Preview](camera-preview.md)
- [Camera Photographing](camera-shooting.md)
- [Video Recording](camera-recording.md)
- [Camera Metadata](camera-metadata.md)
- Best Practices
- [Camera Photographing Sample](camera-shooting-case.md)
- [Video Recording Sample](camera-recording-case.md)
- Image
- [Image Development](image.md)
- Camera
- [Camera Development](camera.md)
- [Distributed Camera Development](remote-camera.md)
- [Image Overview](image-overview.md)
- [Image Decoding](image-decoding.md)
- Image Processing
- [Image Transformation](image-transformation.md)
- [Pixel Map Operation](image-pixelmap-operation.md)
- [Image Encoding](image-encoding.md)
- [Image Tool](image-tool.md)
# Developing Audio Call
During an audio call, audio output (playing the peer voice) and audio input (recording the local voice) are carried out simultaneously. You can use the AudioRenderer to implement audio output and the AudioCapturer to implement audio input.
Before starting or stopping the audio call service, the application needs to check the [audio scene](audio-call-overview.md#audio-scene) and [ringer mode](audio-call-overview.md#ringer-mode) so that it can adopt proper audio management and prompt policies.
The sample code below demonstrates the basic process of using the AudioRenderer and AudioCapturer to implement the audio call service; it does not cover the transmission of call data. In actual development, the peer call data transmitted over the network needs to be decoded and played, whereas the sample code reads an audio file instead; the local call data needs to be encoded, packed, and sent to the peer over the network, whereas the sample code writes an audio file instead.
## Using AudioRenderer to Play the Peer Voice
This process is similar to the process of [using AudioRenderer to develop audio playback](using-audiorenderer-for-playback.md). The key differences lie in the **audioRendererInfo** parameter and audio data source. In the **audioRendererInfo** parameter used for audio calling, **content** must be set to **CONTENT_TYPE_SPEECH**, and **usage** must be set to **STREAM_USAGE_VOICE_COMMUNICATION**.
```ts
import audio from '@ohos.multimedia.audio';
import fs from '@ohos.file.fs';

const TAG = 'VoiceCallDemoForAudioRenderer';
// The process is similar to the process of using AudioRenderer to develop audio playback. The key differences lie in the audioRendererInfo parameter and audio data source.
export default class VoiceCallDemoForAudioRenderer {
  private renderModel = undefined;
  private audioStreamInfo = {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000, // Sampling rate.
    channels: audio.AudioChannel.CHANNEL_2, // Channels.
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sampling format.
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding format.
  }
  private audioRendererInfo = {
    // Parameters corresponding to the call scenario must be used.
    content: audio.ContentType.CONTENT_TYPE_SPEECH, // Audio content type: speech.
    usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION, // Audio stream usage type: voice communication.
    rendererFlags: 0 // AudioRenderer flag. The default value is 0.
  }
  private audioRendererOptions = {
    streamInfo: this.audioStreamInfo,
    rendererInfo: this.audioRendererInfo
  }

  // Create an AudioRenderer instance, and set the events to listen for.
  init() {
    audio.createAudioRenderer(this.audioRendererOptions, (err, renderer) => { // Create an AudioRenderer instance.
      if (!err) {
        console.info(`${TAG}: creating AudioRenderer success`);
        this.renderModel = renderer;
        this.renderModel.on('stateChange', (state) => { // Set the events to listen for. A callback is invoked when the AudioRenderer is switched to the specified state.
          if (state == audio.AudioState.STATE_PREPARED) {
            console.info('audio renderer state is: STATE_PREPARED');
          }
          if (state == audio.AudioState.STATE_RUNNING) {
            console.info('audio renderer state is: STATE_RUNNING');
          }
        });
        this.renderModel.on('markReach', 1000, (position) => { // Subscribe to the markReach event. A callback is triggered when the number of rendered frames reaches 1000.
          if (position == 1000) {
            console.info('ON Triggered successfully');
          }
        });
      } else {
        console.error(`${TAG}: creating AudioRenderer failed, error: ${err.message}`);
      }
    });
  }

  // Start audio rendering.
  async start() {
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
    if (stateGroup.indexOf(this.renderModel.state) === -1) { // Rendering can be started only when the AudioRenderer is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
      console.error(`${TAG}: start failed`);
      return;
    }
    await this.renderModel.start(); // Start rendering.
    const bufferSize = await this.renderModel.getBufferSize();
    // Reading audio file data is used as an example. In actual audio call development, audio data transmitted from the peer needs to be read.
    let context = getContext(this);
    let path = context.filesDir;
    const filePath = path + '/voice_call_data.wav'; // Sandbox path. The actual path is /data/storage/el2/base/haps/entry/files/voice_call_data.wav.
    let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
    let stat = await fs.stat(filePath);
    let buf = new ArrayBuffer(bufferSize);
    let len = stat.size % bufferSize === 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
    for (let i = 0; i < len; i++) {
      let options = {
        offset: i * bufferSize,
        length: bufferSize
      };
      let readsize = await fs.read(file.fd, buf, options);
      // buf indicates the audio data to be written to the buffer. Before calling AudioRenderer.write(), you can preprocess the audio data for personalized playback. The AudioRenderer reads the audio data written to the buffer for rendering.
      let writeSize = await new Promise((resolve, reject) => {
        this.renderModel.write(buf, (err, writeSize) => {
          if (err) {
            reject(err);
          } else {
            resolve(writeSize);
          }
        });
      });
      if (this.renderModel.state === audio.AudioState.STATE_RELEASED) { // The rendering stops if the AudioRenderer is in the STATE_RELEASED state.
        fs.close(file);
        await this.renderModel.stop();
      }
      if (this.renderModel.state === audio.AudioState.STATE_RUNNING) {
        if (i === len - 1) { // The rendering stops when the file finishes reading.
          fs.close(file);
          await this.renderModel.stop();
        }
      }
    }
  }

  // Pause the rendering.
  async pause() {
    // Rendering can be paused only when the AudioRenderer is in the STATE_RUNNING state.
    if (this.renderModel.state !== audio.AudioState.STATE_RUNNING) {
      console.info('Renderer is not running');
      return;
    }
    await this.renderModel.pause(); // Pause rendering.
    if (this.renderModel.state === audio.AudioState.STATE_PAUSED) {
      console.info('Renderer is paused.');
    } else {
      console.error('Pausing renderer failed.');
    }
  }

  // Stop rendering.
  async stop() {
    // Rendering can be stopped only when the AudioRenderer is in the STATE_RUNNING or STATE_PAUSED state.
    if (this.renderModel.state !== audio.AudioState.STATE_RUNNING && this.renderModel.state !== audio.AudioState.STATE_PAUSED) {
      console.info('Renderer is not running or paused.');
      return;
    }
    await this.renderModel.stop(); // Stop rendering.
    if (this.renderModel.state === audio.AudioState.STATE_STOPPED) {
      console.info('Renderer stopped.');
    } else {
      console.error('Stopping renderer failed.');
    }
  }

  // Release the instance.
  async release() {
    // The AudioRenderer can be released only when it is not in the STATE_RELEASED state.
    if (this.renderModel.state === audio.AudioState.STATE_RELEASED) {
      console.info('Renderer already released');
      return;
    }
    await this.renderModel.release(); // Release the instance.
    if (this.renderModel.state === audio.AudioState.STATE_RELEASED) {
      console.info('Renderer released');
    } else {
      console.error('Renderer release failed.');
    }
  }
}
```
## Using AudioCapturer to Record the Local Voice
This process is similar to the process of [using AudioCapturer to develop audio recording](using-audiocapturer-for-recording.md). The key differences lie in the **audioCapturerInfo** parameter and audio data stream direction. In the **audioCapturerInfo** parameter used for audio calling, **source** must be set to **SOURCE_TYPE_VOICE_COMMUNICATION**.
```ts
import audio from '@ohos.multimedia.audio';
import fs from '@ohos.file.fs';

const TAG = 'VoiceCallDemoForAudioCapturer';
// The process is similar to the process of using AudioCapturer to develop audio recording. The key differences lie in the audioCapturerInfo parameter and audio data stream direction.
export default class VoiceCallDemoForAudioCapturer {
  private audioCapturer = undefined;
  private audioStreamInfo = {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, // Sampling rate.
    channels: audio.AudioChannel.CHANNEL_1, // Channels.
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sampling format.
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding format.
  }
  private audioCapturerInfo = {
    // Parameters corresponding to the call scenario must be used.
    source: audio.SourceType.SOURCE_TYPE_VOICE_COMMUNICATION, // Audio source type: voice communication.
    capturerFlags: 0 // AudioCapturer flag. The default value is 0.
  }
  private audioCapturerOptions = {
    streamInfo: this.audioStreamInfo,
    capturerInfo: this.audioCapturerInfo
  }

  // Create an AudioCapturer instance, and set the events to listen for.
  init() {
    audio.createAudioCapturer(this.audioCapturerOptions, (err, capturer) => { // Create an AudioCapturer instance.
      if (err) {
        console.error(`Invoke createAudioCapturer failed, code is ${err.code}, message is ${err.message}`);
        return;
      }
      console.info(`${TAG}: create AudioCapturer success`);
      this.audioCapturer = capturer;
      this.audioCapturer.on('markReach', 1000, (position) => { // Subscribe to the markReach event. A callback is triggered when the number of captured frames reaches 1000.
        if (position === 1000) {
          console.info('ON Triggered successfully');
        }
      });
      this.audioCapturer.on('periodReach', 2000, (position) => { // Subscribe to the periodReach event. A callback is triggered each time the number of captured frames reaches 2000.
        if (position === 2000) {
          console.info('ON Triggered successfully');
        }
      });
    });
  }

  // Start audio recording.
  async start() {
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
    if (stateGroup.indexOf(this.audioCapturer.state) === -1) { // Recording can be started only when the AudioCapturer is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
      console.error(`${TAG}: start failed`);
      return;
    }
    await this.audioCapturer.start(); // Start recording.
    // Writing audio data to a file is used as an example. In actual audio call development, the local audio data needs to be encoded, packed, and then sent to the peer over the network.
    let context = getContext(this);
    const path = context.filesDir + '/voice_call_data.wav'; // Path for storing the recorded audio file.
    let file = fs.openSync(path, fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE); // Create the file if it does not exist.
    let fd = file.fd;
    let numBuffersToCapture = 150; // Write data 150 times.
    let count = 0;
    while (numBuffersToCapture) {
      let bufferSize = await this.audioCapturer.getBufferSize();
      let buffer = await this.audioCapturer.read(bufferSize, true);
      let options = {
        offset: count * bufferSize,
        length: bufferSize
      };
      if (buffer === undefined) {
        console.error(`${TAG}: read buffer failed`);
      } else {
        let number = fs.writeSync(fd, buffer, options);
        console.info(`${TAG}: write data: ${number}`);
      }
      numBuffersToCapture--;
      count++;
    }
  }

  // Stop recording.
  async stop() {
    // The AudioCapturer can be stopped only when it is in the STATE_RUNNING or STATE_PAUSED state.
    if (this.audioCapturer.state !== audio.AudioState.STATE_RUNNING && this.audioCapturer.state !== audio.AudioState.STATE_PAUSED) {
      console.info('Capturer is not running or paused');
      return;
    }
    await this.audioCapturer.stop(); // Stop recording.
    if (this.audioCapturer.state === audio.AudioState.STATE_STOPPED) {
      console.info('Capturer stopped');
    } else {
      console.error('Capturer stop failed');
    }
  }

  // Release the instance.
  async release() {
    // The AudioCapturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
    if (this.audioCapturer.state === audio.AudioState.STATE_RELEASED || this.audioCapturer.state === audio.AudioState.STATE_NEW) {
      console.info('Capturer already released');
      return;
    }
    await this.audioCapturer.release(); // Release the instance.
    if (this.audioCapturer.state === audio.AudioState.STATE_RELEASED) {
      console.info('Capturer released');
    } else {
      console.error('Capturer release failed');
    }
  }
}
```
# Audio Call Development
Typically, audio calls are classified into VoIP calls and cellular calls.
- Voice over Internet Protocol (VoIP) is a technology that enables voice calls over a broadband Internet connection. During a VoIP call, call information is packed into data packets and transmitted over the network. Therefore, VoIP calls have high requirements on network quality, and the call quality is closely related to the network connection speed.
- Cellular call refers to the traditional telephony service provided by carriers. Currently, APIs for developing cellular calling are available only for system applications.
When developing the audio call service, you must use a proper audio processing policy based on the [audio scene](#audio-scene) and [ringer mode](#ringer-mode).
## Audio Scene
When an application uses the audio call service, the system switches to the call-related audio scene (specified by [AudioScene](../reference/apis/js-apis-audio.md#audioscene8)). The system has preset multiple audio scenes, including ringing, cellular call, and voice chat, and uses a scene-specific policy to process audio.
For example, in the cellular call audio scene, the system prioritizes voice clarity. To deliver a crystal clear voice during calls, the system uses the 3A algorithm to preprocess audio data: it suppresses echoes, eliminates background noise, and adjusts the volume range. The 3A algorithm refers to three audio processing algorithms: Acoustic Echo Cancellation (AEC), Active Noise Control (ANC), and Automatic Gain Control (AGC).
Currently, the following audio scenes are preset:
- **AUDIO_SCENE_DEFAULT**: default audio scene, which can be used in all scenarios except audio calls.
- **AUDIO_SCENE_RINGING**: ringing audio scene, which is used when a call is coming and is open only to system applications.
- **AUDIO_SCENE_PHONE_CALL**: cellular call audio scene, which is used for cellular calls and is open only to system applications.
- **AUDIO_SCENE_VOICE_CHAT**: voice chat scene, which is used for VoIP calls.
The application can call **getAudioScene** in the [AudioManager](../reference/apis/js-apis-audio.md#audiomanager) class to obtain the audio scene in use. Before starting or stopping the audio call service, the application can call this API to check whether the system has switched to a suitable audio scene.
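For illustration, a minimal sketch of such a check might look as follows (it assumes a VoIP application that expects the voice chat scene):
```ts
import audio from '@ohos.multimedia.audio';

let audioManager = audio.getAudioManager();

async function checkAudioScene() {
  // Query the audio scene currently in use.
  let scene = await audioManager.getAudioScene();
  if (scene === audio.AudioScene.AUDIO_SCENE_VOICE_CHAT) {
    console.info('The system is already in the voice chat audio scene.');
  } else {
    console.info(`Current audio scene is ${scene}; the application may need to adjust its call handling.`);
  }
}
```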
## Ringer Mode
When an audio call is coming, the application notifies the user by playing a ringtone or vibrating, depending on the setting of [AudioRingMode](../reference/apis/js-apis-audio.md#audioringmode).
The system has preset the following ringer modes:
- **RINGER_MODE_SILENT**: silent mode, in which no sound is played when a call is coming in.
- **RINGER_MODE_VIBRATE**: vibration mode, in which no sound is played but the device vibrates when a call is coming in.
- **RINGER_MODE_NORMAL**: normal mode, in which a ringtone is played when a call is coming in.
The application can call **getRingerMode** in the [AudioVolumeGroupManager](../reference/apis/js-apis-audio.md#audiovolumegroupmanager9) class to obtain the ringer mode in use so as to use a proper policy to notify the user.
If the application wants to be informed of ringer mode changes in time, it can call **on('ringerModeChange')** in the **AudioVolumeGroupManager** class to listen for the changes. When the ringer mode changes, the application receives a notification and can make adjustments accordingly.
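The sketch below reads the ringer mode and subscribes to its changes, assuming the default volume group (**audio.DEFAULT_VOLUME_GROUP_ID**):
```ts
import audio from '@ohos.multimedia.audio';

let audioManager = audio.getAudioManager();

async function watchRingerMode() {
  // Obtain the volume group manager of the default volume group.
  let groupManager = await audioManager.getVolumeManager().getVolumeGroupManager(audio.DEFAULT_VOLUME_GROUP_ID);
  // Read the ringer mode in use to decide how to notify the user.
  let mode = await groupManager.getRingerMode();
  console.info(`Current ringer mode: ${mode}`); // 0: silent, 1: vibrate, 2: normal.
  // Listen for ringer mode changes so that the prompt policy can be adjusted in time.
  groupManager.on('ringerModeChange', (newMode) => {
    console.info(`Ringer mode changed to: ${newMode}`);
  });
}
```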
## Audio Device Switching During a Call
When a call is coming, the system selects an appropriate audio device based on the default priority. The application can switch the call to another audio device as required.
The audio devices that can be used for the audio call are specified by [CommunicationDeviceType](../reference/apis/js-apis-audio.md#communicationdevicetype9). The application can call **isCommunicationDeviceActive** in the [AudioRoutingManager](../reference/apis/js-apis-audio.md#audioroutingmanager9) class to check whether a communication device is active. It can also call **setCommunicationDevice** in the **AudioRoutingManager** class to set a communication device to the active state so that the device can be used for the call.
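As an illustration, the following sketch checks whether the speaker is active and, if not, routes the call audio to it (choosing the speaker as the target device is an assumption made for this example):
```ts
import audio from '@ohos.multimedia.audio';

let audioRoutingManager = audio.getAudioManager().getRoutingManager();

async function switchCallToSpeaker() {
  // Check whether the speaker is already the active communication device.
  let isActive = await audioRoutingManager.isCommunicationDeviceActive(audio.CommunicationDeviceType.SPEAKER);
  if (!isActive) {
    // Activate the speaker so that the call audio is routed to it.
    await audioRoutingManager.setCommunicationDevice(audio.CommunicationDeviceType.SPEAKER, true);
    console.info('Call audio has been routed to the speaker.');
  }
}
```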
# Audio Capture Development
## Introduction
You can use the APIs provided by **AudioCapturer** to record raw audio files, thereby implementing audio data collection.
**Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioCapturer** instance. This is because some operations can be performed only when the audio capturer is in a given state. If the application performs an operation when the audio capturer is not in the given state, the system may throw an exception or generate other undefined behavior.
## Working Principles
The following figure shows the audio capturer state transitions.
**Figure 1** Audio capturer state transitions
![audio-capturer-state](figures/audio-capturer-state.png)
- **PREPARED**: The audio capturer enters this state by calling **create()**.
- **RUNNING**: The audio capturer enters this state by calling **start()** when it is in the **PREPARED** or **STOPPED** state.
- **STOPPED**: The audio capturer in the **RUNNING** state can call **stop()** to stop capturing audio data.
- **RELEASED**: The audio capturer in the **PREPARED** or **STOPPED** state can use **release()** to release all occupied hardware and software resources. It will not transit to any other state after it enters the **RELEASED** state.
## Constraints
Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details, see [Permission Application Guide](../security/accesstoken-guidelines.md#declaring-permissions-in-the-configuration-file).
## How to Develop
For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8).
1. Use **createAudioCapturer()** to create a global **AudioCapturer** instance.
Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording state, and register a callback for notification.
```js
import audio from '@ohos.multimedia.audio';
import fs from '@ohos.file.fs'; // It will be used for the call of the read function in step 3.

// Perform a self-test on APIs related to audio capturing.
@Entry
@Component
struct AudioRenderer {
  @State message: string = 'Hello World'
  private audioCapturer: audio.AudioCapturer; // It will be used globally.

  async initAudioCapturer() {
    let audioStreamInfo = {
      samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
      channels: audio.AudioChannel.CHANNEL_1,
      sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
      encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
    }
    let audioCapturerInfo = {
      source: audio.SourceType.SOURCE_TYPE_MIC,
      capturerFlags: 0 // 0 is the extended flag bit of the audio capturer. The default value is 0.
    }
    let audioCapturerOptions = {
      streamInfo: audioStreamInfo,
      capturerInfo: audioCapturerInfo
    }
    this.audioCapturer = await audio.createAudioCapturer(audioCapturerOptions);
    console.log('AudioRecLog: Create audio capturer success.');
  }
```
2. Use **start()** to start audio recording.
The capturer state will be **STATE_RUNNING** once the audio capturer is started. The application can then begin reading buffers.
```js
async startCapturer() {
  let state = this.audioCapturer.state;
  // The audio capturer can be started only when it is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
  if (state == audio.AudioState.STATE_PREPARED || state == audio.AudioState.STATE_PAUSED ||
      state == audio.AudioState.STATE_STOPPED) {
    await this.audioCapturer.start();
    state = this.audioCapturer.state;
    if (state == audio.AudioState.STATE_RUNNING) {
      console.info('AudioRecLog: Capturer started');
    } else {
      console.error('AudioRecLog: Capturer start failed');
    }
  }
}
```
3. Read the captured audio data and convert it to a byte stream. Call **read()** repeatedly to read the data until the application stops the recording.
The following example shows how to write recorded data into a file.
```js
async readData() {
  let state = this.audioCapturer.state;
  // The read operation can be performed only when the state is STATE_RUNNING.
  if (state != audio.AudioState.STATE_RUNNING) {
    console.info('Capturer is not in a correct state to read');
    return;
  }
  const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the collected audio file.
  let file = fs.openSync(path, 0o2);
  let fd = file.fd;
  if (file !== null) {
    console.info('AudioRecLog: file created');
  } else {
    console.info('AudioRecLog: file create : FAILED');
    return;
  }
  if (fd !== null) {
    console.info('AudioRecLog: file fd opened in append mode');
  }
  let numBuffersToCapture = 150; // Write data 150 times.
  let count = 0;
  while (numBuffersToCapture) {
    this.bufferSize = await this.audioCapturer.getBufferSize();
    let buffer = await this.audioCapturer.read(this.bufferSize, true);
    let options = {
      offset: count * this.bufferSize,
      length: this.bufferSize
    }
    if (buffer === undefined) {
      console.error('AudioRecLog: read buffer failed');
    } else {
      let number = fs.writeSync(fd, buffer, options);
      console.info(`AudioRecLog: data written: ${number}`);
    }
    numBuffersToCapture--;
    count++;
  }
}
```
4. Once the recording is complete, call **stop()** to stop the recording.
```js
async stopCapturer() {
  let state = this.audioCapturer.state;
  // The audio capturer can be stopped only when it is in the STATE_RUNNING or STATE_PAUSED state.
  if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
    console.info('AudioRecLog: Capturer is not running or paused');
    return;
  }
  await this.audioCapturer.stop();
  state = this.audioCapturer.state;
  if (state == audio.AudioState.STATE_STOPPED) {
    console.info('AudioRecLog: Capturer stopped');
  } else {
    console.error('AudioRecLog: Capturer stop failed');
  }
}
```
5. After the task is complete, call **release()** to release related resources.
```js
async releaseCapturer() {
  let state = this.audioCapturer.state;
  // The audio capturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
  if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
    console.info('AudioRecLog: Capturer already released');
    return;
  }
  await this.audioCapturer.release();
  state = this.audioCapturer.state;
  if (state == audio.AudioState.STATE_RELEASED) {
    console.info('AudioRecLog: Capturer released');
  } else {
    console.error('AudioRecLog: Capturer release failed');
  }
}
```
6. (Optional) Obtain the audio capturer information.
You can use the following code to obtain the audio capturer information:
```js
async getAudioCapturerInfo() {
  // Obtain the audio capturer state.
  let state = this.audioCapturer.state;
  // Obtain the audio capturer information.
  let audioCapturerInfo: audio.AudioCapturerInfo = await this.audioCapturer.getCapturerInfo();
  // Obtain the audio stream information.
  let audioStreamInfo: audio.AudioStreamInfo = await this.audioCapturer.getStreamInfo();
  // Obtain the audio stream ID.
  let audioStreamId: number = await this.audioCapturer.getAudioStreamId();
  // Obtain the Unix timestamp, in nanoseconds.
  let audioTime: number = await this.audioCapturer.getAudioTime();
  // Obtain a proper minimum buffer size.
  let bufferSize: number = await this.audioCapturer.getBufferSize();
}
```
7. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.
After the mark reached event is subscribed to, when the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
async markReach() {
  this.audioCapturer.on('markReach', 10, (reachNumber) => {
    console.info('Mark reach event Received');
    console.info(`The Capturer reached frame: ${reachNumber}`);
  });
  this.audioCapturer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for.
}
```
8. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event.
After the period reached event is subscribed to, each time the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
async periodReach() {
  this.audioCapturer.on('periodReach', 10, (reachNumber) => {
    console.info('Period reach event Received');
    console.info(`In this period, the Capturer reached frame: ${reachNumber}`);
  });
  this.audioCapturer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for.
}
```
9. If your application needs to perform some operations when the audio capturer state is updated, it can subscribe to the state change event. When the audio capturer state is updated, the application receives a callback containing the event type.
```js
async stateChange() {
  this.audioCapturer.on('stateChange', (state) => {
    console.info(`AudioCapturerLog: Changed State to : ${state}`)
    switch (state) {
      case audio.AudioState.STATE_PREPARED:
        console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------');
        console.info('Audio State is : Prepared');
        break;
      case audio.AudioState.STATE_RUNNING:
        console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------');
        console.info('Audio State is : Running');
        break;
      case audio.AudioState.STATE_STOPPED:
        console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------');
        console.info('Audio State is : stopped');
        break;
      case audio.AudioState.STATE_RELEASED:
        console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------');
        console.info('Audio State is : released');
        break;
      default:
        console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
        console.info('Audio State is : invalid');
        break;
    }
  });
}
```
# Audio Input Device Management
If multiple audio input devices are connected, you can use **AudioRoutingManager** to specify an audio input device to record audio. For details about the APIs, see [AudioRoutingManager](../reference/apis/js-apis-audio.md#audioroutingmanager9).
## Creating an AudioRoutingManager Instance
Before using **AudioRoutingManager** to manage audio devices, import the audio module and create an **AudioManager** instance.
```ts
import audio from '@ohos.multimedia.audio'; // Import the audio module.
let audioManager = audio.getAudioManager(); // Create an AudioManager instance.
let audioRoutingManager = audioManager.getRoutingManager(); // Call an API of AudioManager to create an AudioRoutingManager instance.
```
## Supported Audio Input Device Types
The table below lists the supported audio input devices.
| Name| Value| Description|
| -------- | -------- | -------- |
| WIRED_HEADSET | 3 | Wired headset with a microphone.|
| BLUETOOTH_SCO | 7 | Bluetooth device using Synchronous Connection Oriented (SCO) links.|
| MIC | 15 | Microphone.|
| USB_HEADSET | 22 | USB Type-C headset.|
## Obtaining Input Device Information
Use **getDevices()** to obtain information about all the input devices.
```ts
audioRoutingManager.getDevices(audio.DeviceFlag.INPUT_DEVICES_FLAG).then((data) => {
  console.info('Promise returned to indicate that the device list is obtained.');
});
```
## Listening for Device Connection State Changes
Set a listener to listen for changes of the device connection state. When a device is connected or disconnected, a callback is triggered.
```ts
// Listen for connection state changes of audio devices.
audioRoutingManager.on('deviceChange', audio.DeviceFlag.INPUT_DEVICES_FLAG, (deviceChanged) => {
  console.info('device change type: ' + deviceChanged.type); // Device connection state change. The value 0 means that the device is connected, and 1 means that the device is disconnected.
  console.info('device descriptor size : ' + deviceChanged.deviceDescriptors.length);
  console.info('device change descriptor: ' + deviceChanged.deviceDescriptors[0].deviceRole); // Device role.
  console.info('device change descriptor: ' + deviceChanged.deviceDescriptors[0].deviceType); // Device type.
});

// Cancel the listener for the connection state changes of audio devices.
audioRoutingManager.off('deviceChange', (deviceChanged) => {
  console.info('Should be no callback.');
});
```
## Selecting an Audio Input Device (for System Applications only)
Currently, only one input device can be selected, and the device ID is used as the unique identifier. For details about audio device descriptors, see [AudioDeviceDescriptors](../reference/apis/js-apis-audio.md#audiodevicedescriptors).
> **NOTE**
>
> The user can connect to a group of audio devices (for example, a pair of Bluetooth headsets), but the system treats them as one device (a group of devices that share the same device ID).
```ts
let inputAudioDeviceDescriptor = [{
  deviceRole: audio.DeviceRole.INPUT_DEVICE,
  deviceType: audio.DeviceType.MIC, // The device type must match the input device role.
  id: 1,
  name: "",
  address: "",
  sampleRates: [44100],
  channelCounts: [2],
  channelMasks: [0],
  networkId: audio.LOCAL_NETWORK_ID,
  interruptGroupId: 1,
  volumeGroupId: 1,
}];

async function selectInputDevice() {
  audioRoutingManager.selectInputDevice(inputAudioDeviceDescriptor).then(() => {
    console.info('Invoke selectInputDevice succeeded.');
  }).catch((err) => {
    console.error(`Invoke selectInputDevice failed, code is ${err.code}, message is ${err.message}`);
  });
}
```
# Audio Interruption Mode Development
## Introduction
The audio interruption mode is used to control the playback of multiple audio streams.
An audio application can set the audio interruption mode of its **AudioRenderer** streams to independent or shared.
In shared mode, multiple audio streams share one session ID. In independent mode, each audio stream has an independent session ID.
**Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions.
## How to Develop
For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.
Set parameters of the **AudioRenderer** instance in **audioRendererOptions**.
This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.
```js
import audio from '@ohos.multimedia.audio';
let audioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
  channels: audio.AudioChannel.CHANNEL_1,
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
let audioRendererInfo = {
  content: audio.ContentType.CONTENT_TYPE_SPEECH,
  usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
  rendererFlags: 1
}
let audioRendererOptions = {
  streamInfo: audioStreamInfo,
  rendererInfo: audioRendererInfo
}
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
```
2. Set the audio interruption mode.
After the **AudioRenderer** instance is initialized, you can set the audio interruption mode.<br>
```js
let mode_ = audio.InterruptMode.SHARE_MODE;
audioRenderer.setInterruptMode(mode_).then(() => {
  console.log('[JSAR] [SetInterruptMode] Setting ' + (mode_ == 0 ? 'share mode' : 'independent mode') + ' success');
});
```
# Audio Output Device Management
If multiple audio output devices are connected, you can use **AudioRoutingManager** to specify an audio output device to play audio. For details about the APIs, see [AudioRoutingManager](../reference/apis/js-apis-audio.md#audioroutingmanager9).
## Creating an AudioRoutingManager Instance
Before using **AudioRoutingManager** to manage audio devices, import the audio module and create an **AudioManager** instance.
```ts
import audio from '@ohos.multimedia.audio'; // Import the audio module.
let audioManager = audio.getAudioManager(); // Create an AudioManager instance.
let audioRoutingManager = audioManager.getRoutingManager(); // Call an API of AudioManager to create an AudioRoutingManager instance.
```
## Supported Audio Output Device Types
The table below lists the supported audio output devices.
| Name| Value| Description|
| -------- | -------- | -------- |
| EARPIECE | 1 | Earpiece.|
| SPEAKER | 2 | Speaker.|
| WIRED_HEADSET | 3 | Wired headset with a microphone.|
| WIRED_HEADPHONES | 4 | Wired headphones without a microphone.|
| BLUETOOTH_SCO | 7 | Bluetooth device using Synchronous Connection Oriented (SCO) links.|
| BLUETOOTH_A2DP | 8 | Bluetooth device using Advanced Audio Distribution Profile (A2DP) links.|
| USB_HEADSET | 22 | USB Type-C headset.|
## Obtaining Output Device Information
Use **getDevices()** to obtain information about all the output devices.
```ts
audioRoutingManager.getDevices(audio.DeviceFlag.OUTPUT_DEVICES_FLAG).then((data) => {
  console.info('Promise returned to indicate that the device list is obtained.');
});
```
## Listening for Device Connection State Changes
Set a listener to listen for changes of the device connection state. When a device is connected or disconnected, a callback is triggered.
```ts
// Listen for connection state changes of audio devices.
audioRoutingManager.on('deviceChange', audio.DeviceFlag.OUTPUT_DEVICES_FLAG, (deviceChanged) => {
  console.info('device change type: ' + deviceChanged.type); // Device connection state change. The value 0 means that the device is connected, and 1 means that the device is disconnected.
  console.info('device descriptor size : ' + deviceChanged.deviceDescriptors.length);
  console.info('device change descriptor: ' + deviceChanged.deviceDescriptors[0].deviceRole); // Device role.
  console.info('device change descriptor: ' + deviceChanged.deviceDescriptors[0].deviceType); // Device type.
});

// Cancel the listener for the connection state changes of audio devices.
audioRoutingManager.off('deviceChange', (deviceChanged) => {
  console.info('Should be no callback.');
});
```
## Selecting an Audio Output Device (for System Applications only)
Currently, only one output device can be selected, and the device ID is used as the unique identifier. For details about audio device descriptors, see [AudioDeviceDescriptors](../reference/apis/js-apis-audio.md#audiodevicedescriptors).
> **NOTE**
>
> The user can connect to a group of audio devices (for example, a pair of Bluetooth headsets), but the system treats them as one device (a group of devices that share the same device ID).
```ts
let outputAudioDeviceDescriptor = [{
  deviceRole: audio.DeviceRole.OUTPUT_DEVICE,
  deviceType: audio.DeviceType.SPEAKER,
  id: 1,
  name: "",
  address: "",
  sampleRates: [44100],
  channelCounts: [2],
  channelMasks: [0],
  networkId: audio.LOCAL_NETWORK_ID,
  interruptGroupId: 1,
  volumeGroupId: 1,
}];

async function selectOutputDevice() {
  audioRoutingManager.selectOutputDevice(outputAudioDeviceDescriptor).then(() => {
    console.info('Invoke selectOutputDevice succeeded.');
  }).catch((err) => {
    console.error(`Invoke selectOutputDevice failed, code is ${err.code}, message is ${err.message}`);
  });
}
```
# Audio Overview
You can use APIs provided by the audio module to implement audio-related features, including audio playback and volume management.
## Basic Concepts
- **Sampling**
Sampling is the process of obtaining discrete-time signals by extracting samples from a continuous-time analog signal at a specific interval.
- **Sampling rate**
Sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal, measured in Hz. Generally, the human hearing range is from 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 22.05 kHz, 16 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, 96 kHz, and 192 kHz.
- **Channel**
Channels refer to different spatial positions where independent audio signals are recorded or played. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback.
- **Audio frame**
Audio data is in stream form. For the convenience of audio algorithm processing and transmission, the data corresponding to a duration of 2.5 to 60 milliseconds is conventionally treated as one audio frame. This duration is called the sampling time, and its length is specific to the codec and the application requirements.
- **PCM**<br>
Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital samples. The worked example below shows how these concepts determine the data rate of a PCM stream.
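The byte rate of a PCM stream follows directly from the sampling rate, sample format, and channel count: byte rate = sampling rate x bytes per sample x channel count. The sketch below works through the arithmetic for an illustrative 48 kHz, 16-bit, stereo stream; the 20 ms frame length is an assumption chosen from the range above:
```ts
const samplingRate = 48000; // Sampling rate: 48 kHz.
const bytesPerSample = 2;   // 16-bit samples (for example, SAMPLE_FORMAT_S16LE).
const channels = 2;         // Stereo.

// Byte rate = sampling rate x bytes per sample x channel count.
const bytesPerSecond = samplingRate * bytesPerSample * channels; // 192000 bytes/s.

// Size of one 20 ms audio frame at this byte rate.
const frameDurationMs = 20;
const bytesPerFrame = bytesPerSecond * frameDurationMs / 1000; // 3840 bytes.

console.info(`PCM byte rate: ${bytesPerSecond} B/s, 20 ms frame size: ${bytesPerFrame} B`);
```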
# Audio Playback Concurrency Policy
## Audio Interruption Policy
If multiple audio streams are played at the same time, the result can be unpleasant or even jarring for the user. To address this issue, OpenHarmony presets the audio interruption policy so that only the audio stream holding audio focus can be played.
When an application attempts to play audio, the system requests audio focus for the audio stream. The audio stream that gains the focus can be played; if the request is rejected, the audio stream cannot be played. If the audio stream is interrupted by another, it loses the focus and the playback is paused. All these actions are performed automatically by the system and require no additional operations by the application. However, to maintain state consistency between the application and the system and ensure a good user experience, it is recommended that the application [listen for the audio interruption event](#listening-for-the-audio-interruption-event) and perform the corresponding processing when receiving such an event (specified by [InterruptEvent](../reference/apis/js-apis-audio.md#interruptevent9)).
OpenHarmony presets two [audio interruption modes](#audio-interruption-mode) to specify whether audio concurrency is controlled by the application or system. You can choose a mode for each of the audio streams created by the same application.
The audio interruption policy determines the operations (for example, pause, resume, duck, or unduck) to be performed on the audio stream. These operations can be performed by the system or application. To distinguish the body that executes the operations, the [audio interruption type](#audio-interruption-type) is introduced, and two audio interruption types are preset.
### Audio Interruption Mode
Two audio interruption modes, specified by [InterruptMode](../reference/apis/js-apis-audio.md#interruptmode9), are preset in the audio interruption policy:
- **SHARED_MODE**: Multiple audio streams created by an application share one audio focus. The concurrency rules between these audio streams are determined by the application, without the use of the audio interruption policy. However, if another application needs to play audio while one of these audio streams is being played, the audio interruption policy is triggered.
- **INDEPENDENT_MODE**: Each audio stream created by an application has an independent audio focus. When multiple audio streams are played concurrently, the audio interruption policy is triggered.
The application can select an audio interruption mode as required. **SHARED_MODE** is used by default.
You can set the audio interruption mode in either of the following ways, as shown in the sketch after this list:
- If you [use the AVPlayer to develop audio playback](using-avplayer-for-playback.md), set the [audioInterruptMode](../reference/apis/js-apis-media.md#avplayer9) attribute of the AVPlayer to set the audio interruption mode.
- If you [use the AudioRenderer to develop audio playback](using-audiorenderer-for-playback.md), call [setInterruptMode](../reference/apis/js-apis-audio.md#setinterruptmode9) of the AudioRenderer to set the audio interruption mode.
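A brief sketch of both approaches follows; it assumes that **avPlayer** and **audioRenderer** are instances that have already been created by the respective development processes:
```ts
import audio from '@ohos.multimedia.audio';

// With the AVPlayer: assign the audioInterruptMode attribute after the instance is initialized.
avPlayer.audioInterruptMode = audio.InterruptMode.INDEPENDENT_MODE;

// With the AudioRenderer: call setInterruptMode.
audioRenderer.setInterruptMode(audio.InterruptMode.INDEPENDENT_MODE).then(() => {
  console.info('Interrupt mode set to INDEPENDENT_MODE.');
});
```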
### Audio Interruption Type
The audio interruption policy (containing two audio interruption modes) determines the operation to be performed on each audio stream. These operations can be carried out by the system or application. To distinguish the executors, the audio interruption type, specified by [InterruptForceType](../reference/apis/js-apis-audio.md#interruptforcetype9), is introduced.
- **INTERRUPT_FORCE**: The operation is performed by the system. The system forcibly interrupts audio playback.
- **INTERRUPT_SHARE**: The operation is performed by the application. The application can take action or ignore as required.
For the pause operation, the **INTERRUPT_FORCE** type is always used and cannot be changed by the application. However, the application can choose to use **INTERRUPT_SHARE** for other operations, such as the resume operation. The application can obtain the audio interruption type based on the value of the member variable **forceType** in the audio interruption event.
During audio playback, the system automatically requests, holds, and releases the focus for the audio stream. When audio interruption occurs, the system forcibly pauses or stops playing or ducks the volume down for the audio stream, and sends an audio interruption event callback to the application. To maintain state consistency between the application and the system and ensure good user experience, it is recommended that the application [listen for the audio interruption event](#listening-for-the-audio-interruption-event) and perform processing when receiving such an event.
For operations that cannot be forcibly performed by the system (for example, resume), the system sends the audio interruption event containing **INTERRUPT_SHARE**, and the application can choose to take action or ignore.
## Listening for the Audio Interruption Event
Your application is advised to listen for the audio interruption event when playing audio. When audio interruption occurs, the system performs processing on the audio stream according to the preset policy and sends the audio interruption event to the application.
Upon the receipt of the event, the application carries out processing based on the event content to ensure that the application state is consistent with the expected effect.
You can use either of the following methods to listen for the audio interruption event:
- If you [use the AVPlayer to develop audio playback](using-avplayer-for-playback.md), call [on('audioInterrupt')](../reference/apis/js-apis-media.md#onaudiointerrupt9) of the AVPlayer to listen for the event.
- If you [use the AudioRenderer to develop audio playback](using-audiorenderer-for-playback.md), call [on('audioInterrupt')](../reference/apis/js-apis-audio.md#onaudiointerrupt9) of the AudioRenderer to listen for the event.
To deliver an optimal user experience, the application needs to perform processing based on the event content. The following uses the AudioRenderer as an example to describe the recommended application processing. (The recommended processing is similar if the AVPlayer is used to develop audio playback.) You can customize the code to implement your own audio playback functionality or application processing based on service requirements.
```ts
let isPlay; // An identifier specifying whether the audio stream is being played. In actual development, this parameter corresponds to the module related to the audio playback state.
let isDucked; // An identifier specifying whether to duck the volume down. In actual development, this parameter corresponds to the module related to the audio volume.
let started; // An identifier specifying whether the start operation is successful.

async function onAudioInterrupt() {
  // The AudioRenderer is used as an example to describe how to develop audio playback. The audioRenderer variable is the AudioRenderer instance created for playback.
  audioRenderer.on('audioInterrupt', async (interruptEvent) => {
    // When an audio interruption event occurs, the audioRenderer receives the interruptEvent callback and performs processing based on the content in the callback.
    // The audioRenderer reads the value of interruptEvent.forceType to see whether the system has forcibly performed the operation.
    // The audioRenderer then reads the value of interruptEvent.hintType and performs corresponding processing.
    if (interruptEvent.forceType === audio.InterruptForceType.INTERRUPT_FORCE) {
      // If the value of interruptEvent.forceType is INTERRUPT_FORCE, the system has performed audio-related processing, and the application needs to update its state and make adjustments accordingly.
      switch (interruptEvent.hintType) {
        case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
          // The system has paused the audio stream (the focus is temporarily lost). To ensure state consistency, the application needs to switch to the audio paused state.
          // Temporarily losing the focus: After the other audio stream releases the focus, the current audio stream will receive the audio interruption event corresponding to resume and automatically resume the playback.
          isPlay = false; // A simplified processing indicating several operations for switching the application to the audio paused state.
          break;
        case audio.InterruptHint.INTERRUPT_HINT_STOP:
          // The system has stopped the audio stream (the focus is permanently lost). To ensure state consistency, the application needs to switch to the audio paused state.
          // Permanently losing the focus: No audio interruption event will be received. The user must manually trigger the operation to resume playback.
          isPlay = false; // A simplified processing indicating several operations for switching the application to the audio paused state.
          break;
        case audio.InterruptHint.INTERRUPT_HINT_DUCK:
          // The system has ducked the volume down (20% of the normal volume by default). To ensure state consistency, the application needs to switch to the volume decreased state.
          // If the application does not want to play at a lower volume, it can select another processing mode, for example, proactively pausing the playback.
          isDucked = true; // A simplified processing indicating several operations for switching the application to the volume decreased state.
          break;
        case audio.InterruptHint.INTERRUPT_HINT_UNDUCK:
          // The system has restored the audio volume to normal. To ensure state consistency, the application needs to switch to the normal volume state.
          isDucked = false; // A simplified processing indicating several operations for switching the application to the normal volume state.
          break;
        default:
          break;
      }
    } else if (interruptEvent.forceType === audio.InterruptForceType.INTERRUPT_SHARE) {
      // If the value of interruptEvent.forceType is INTERRUPT_SHARE, the application can take action or ignore the event as required.
      switch (interruptEvent.hintType) {
        case audio.InterruptHint.INTERRUPT_HINT_RESUME:
          // The paused audio stream can be played. It is recommended that the application continue to play the audio stream and switch to the audio playing state.
          // If the application does not want to continue the playback, it can ignore the event.
          // To continue the playback, the application needs to call start(), and use the identifier variable started to record the execution result of start().
          await audioRenderer.start().then(async function () {
            started = true; // Calling start() is successful.
          }).catch((err) => {
            started = false; // Calling start() fails.
          });
          // If calling start() is successful, the application needs to switch to the audio playing state.
          if (started) {
            isPlay = true; // A simplified processing indicating several operations for switching the application to the audio playing state.
          } else {
            // Resuming the audio playback fails.
          }
          break;
        default:
          break;
      }
    }
  });
}
```
# Audio Playback Development
## Selecting an Audio Playback Development Mode
OpenHarmony provides multiple classes for you to develop audio playback applications. You can select them based on the audio data format, audio source, audio usage scenario, and even the programming language you use. Selecting a suitable class helps reduce the development workload and deliver a better playback experience.
- [AVPlayer](using-avplayer-for-playback.md): provides ArkTS and JS APIs to implement audio and video playback. It also supports parsing streaming media and local assets, decapsulating media assets, decoding audio, and outputting audio. It can play audio files in MP3 and M4A formats, but not in PCM format.
- [AudioRenderer](using-audiorenderer-for-playback.md): provides ArkTS and JS APIs to implement audio output. It supports only the PCM format and requires applications to continuously write audio data. The applications can perform data preprocessing, for example, setting the sampling rate and bit width of audio files, before audio input. This class can be used to develop more professional and diverse playback applications. To use this class, you must have basic audio processing knowledge.
- [OpenSL ES](using-opensl-es-for-playback.md): provides a set of standard, cross-platform, yet unique native audio APIs. It supports audio output in PCM format and is applicable to playback applications that are ported from other embedded platforms or that implement audio output at the native layer.
- [TonePlayer](using-toneplayer-for-playback.md): provides ArkTS and JS APIs to implement the playback of dialing tones and ringback tones. The content to play is selected from a fixed type range, without requiring the input of media assets or audio data. This class is applicable to specific scenarios where dialing tones and ringback tones are played, and it is available only to system applications.
- Applications often need to use short sound effects, such as the camera shutter, key press, and game shooting sound effects. Currently, only the **AVPlayer** class can implement audio file playback. More APIs will be provided to support this scenario in later versions.
## Precautions for Developing Audio Playback Applications
To enable your application to continue playing audio in the background or when the screen is off, the application must meet the following conditions:
1. The application is registered with the system for unified management through the **AVSession** APIs. Otherwise, the playback will be forcibly stopped when the application switches to the background. For details, see [AVSession Development](avsession-overview.md).
2. The application must request a continuous task to prevent from being suspended. For details, see [Continuous Task Development](../task-management/continuous-task-dev-guide.md).
If the playback is interrupted when the application switches to the background, you can view the log to check whether the application has requested a continuous task. If the continuous task has been requested, no log records **pause id**; otherwise, a log records **pause id**.
# Audio Playback Stream Management
An audio playback application must monitor audio stream state changes and perform corresponding operations. For example, when detecting that an audio stream is being played or paused, the application must change the UI display of the **Play** button.
## Reading or Listening for Audio Stream State Changes in the Application
Create an AudioRenderer by referring to [Using AudioRenderer for Audio Playback](using-audiorenderer-for-playback.md) or [audio.createAudioRenderer](../reference/apis/js-apis-audio.md#audiocreateaudiorenderer8). Then obtain the audio stream state changes in either of the following ways:
- Check the [state](../reference/apis/js-apis-audio.md#attributes) of the AudioRenderer.
```ts
let audioRendererState = audioRenderer.state;
console.info(`Current state is: ${audioRendererState}`)
```
- Register **stateChange** to listen for state changes of the AudioRenderer.
```ts
audioRenderer.on('stateChange', (rendererState) => {
  console.info(`State change to: ${rendererState}`)
});
```
The application then performs an operation, for example, changing the display of the **Play** button, by comparing the obtained state with [AudioState](../reference/apis/js-apis-audio.md#audiostate8).
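For example, a sketch of such a comparison might look as follows (the **getPlayButtonLabel** helper is hypothetical):
```ts
import audio from '@ohos.multimedia.audio';

// Hypothetical helper: derive the Play button label from the renderer state.
function getPlayButtonLabel(state: audio.AudioState): string {
  switch (state) {
    case audio.AudioState.STATE_RUNNING:
      return 'Pause'; // The stream is playing, so the button should offer to pause it.
    case audio.AudioState.STATE_PREPARED:
    case audio.AudioState.STATE_PAUSED:
    case audio.AudioState.STATE_STOPPED:
      return 'Play'; // The stream can be started or resumed.
    default:
      return 'Play';
  }
}
```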
## Reading or Listening for Changes in All Audio Streams
If an application needs to obtain the change information about all audio streams, it can use **AudioStreamManager** to read or listen for the changes of all audio streams.
> **NOTE**
>
> The audio stream change information marked as the system API can be viewed only by system applications.
The figure below shows the call relationship of audio stream management.
![Call relationship of audio stream management](figures/audio-stream-mgmt-invoking-relationship.png)
During application development, first use **getStreamManager()** to create an **AudioStreamManager** instance. Then call **on('audioRendererChange')** to listen for audio stream changes and obtain a notification when the audio stream state or device changes. To cancel the listening for these changes, call **off('audioRendererChange')**. You can also call **getCurrentAudioRendererInfoArray()** to obtain information such as the unique ID of the playback stream, UID of the playback stream client, and stream status.
For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9).
## How to Develop
1. Create an **AudioStreamManager** instance.
Before using **AudioStreamManager** APIs, you must use **getStreamManager()** to create an **AudioStreamManager** instance.
```ts
import audio from '@ohos.multimedia.audio';
let audioManager = audio.getAudioManager();
let audioStreamManager = audioManager.getStreamManager();
```
2. Use **on('audioRendererChange')** to listen for audio playback stream changes. If the application needs to receive a notification when the audio playback stream state or device changes, it can subscribe to this event.
```ts
audioStreamManager.on('audioRendererChange', (AudioRendererChangeInfoArray) => {
for (let i = 0; i < AudioRendererChangeInfoArray.length; i++) {
let AudioRendererChangeInfo = AudioRendererChangeInfoArray[i];
console.info(`## RendererChange on is called for ${i} ##`);
console.info(`StreamId for ${i} is: ${AudioRendererChangeInfo.streamId}`);
console.info(`Content ${i} is: ${AudioRendererChangeInfo.rendererInfo.content}`);
console.info(`Stream ${i} is: ${AudioRendererChangeInfo.rendererInfo.usage}`);
console.info(`Flag ${i} is: ${AudioRendererChangeInfo.rendererInfo.rendererFlags}`);
for (let j = 0;j < AudioRendererChangeInfo.deviceDescriptors.length; j++) {
console.info(`Id: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].id}`);
console.info(`Type: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].deviceType}`);
console.info(`Role: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].deviceRole}`);
console.info(`Name: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].name}`);
console.info(`Address: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].address}`);
console.info(`SampleRates: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].sampleRates[0]}`);
console.info(`ChannelCount ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].channelCounts[0]}`);
console.info(`ChannelMask: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].channelMasks}`);
}
}
});
```
3. (Optional) Use **off('audioRendererChange')** to cancel listening for audio playback stream changes.
```ts
audioStreamManager.off('audioRendererChange');
console.info('RendererChange Off is called ');
```
4. (Optional) Call **getCurrentAudioRendererInfoArray()** to obtain the information about all audio playback streams.
This API can be used to obtain the unique ID of the audio playback stream, UID of the audio playback client, audio status, and other information about the audio player.
> **NOTE**
>
> Before listening for state changes of all audio streams, the application must request the **ohos.permission.USE_BLUETOOTH** [permission](../security/accesstoken-guidelines.md) so that the device name and device address (Bluetooth-related attributes) can be displayed correctly.
```ts
async function getCurrentAudioRendererInfoArray(){
await audioStreamManager.getCurrentAudioRendererInfoArray().then( function (AudioRendererChangeInfoArray) {
console.info(`getCurrentAudioRendererInfoArray Get Promise is called `);
if (AudioRendererChangeInfoArray != null) {
for (let i = 0; i < AudioRendererChangeInfoArray.length; i++) {
let AudioRendererChangeInfo = AudioRendererChangeInfoArray[i];
console.info(`StreamId for ${i} is: ${AudioRendererChangeInfo.streamId}`);
console.info(`Content ${i} is: ${AudioRendererChangeInfo.rendererInfo.content}`);
console.info(`Stream ${i} is: ${AudioRendererChangeInfo.rendererInfo.usage}`);
console.info(`Flag ${i} is: ${AudioRendererChangeInfo.rendererInfo.rendererFlags}`);
for (let j = 0;j < AudioRendererChangeInfo.deviceDescriptors.length; j++) {
console.info(`Id: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].id}`);
console.info(`Type: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].deviceType}`);
console.info(`Role: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].deviceRole}`);
console.info(`Name: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].name}`);
console.info(`Address: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].address}`);
console.info(`SampleRates: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].sampleRates[0]}`);
console.info(`ChannelCount ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].channelCounts[0]}`);
console.info(`ChannelMask: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].channelMasks}`);
}
}
}
}).catch((err) => {
console.error(`Invoke getCurrentAudioRendererInfoArray failed, code is ${err.code}, message is ${err.message}`);
});
}
```
# Audio Playback Development
## Introduction
You can use audio playback APIs to convert audio data into audible analog signals and play the signals using output devices. You can also manage playback tasks. For example, you can control the playback and volume, obtain track information, and release resources.
## Working Principles
The following figures show the audio playback state transition and the interaction with external modules for audio playback.
**Figure 1** Audio playback state transition
![en-us_image_audio_state_machine](figures/en-us_image_audio_state_machine.png)
**NOTE**: The **src** attribute can be set only when the player is in the **Idle** state. In addition, after the **src** attribute is set successfully, you must call **reset()** before setting it to another value.
**Figure 2** Interaction with external modules for audio playback
![en-us_image_audio_player](figures/en-us_image_audio_player.png)
**NOTE**: When a third-party application calls the JS APIs provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework. The decoded audio data is then output to the audio HDI at the hardware interface layer to implement audio playback.
## How to Develop
For details about the APIs, see [AudioPlayer in the Media API](../reference/apis/js-apis-media.md#audioplayer).
> **NOTE**
>
> The method for obtaining the path in the FA model is different from that in the stage model. For details about how to obtain the path, see [Application Sandbox Path Guidelines](../reference/apis/js-apis-fileio.md#guidelines).
### Full-Process Scenario
The full audio playback process includes creating an instance, setting the URI, playing audio, seeking to the playback position, setting the volume, pausing playback, obtaining track information, stopping playback, resetting the player, and releasing resources.
For details about the **src** types supported by **AudioPlayer**, see the [src attribute](../reference/apis/js-apis-media.md#audioplayer_attributes).
```js
import media from '@ohos.multimedia.media'
import fs from '@ohos.file.fs'
// Print the stream track information.
function printfDescription(obj) {
for (let item in obj) {
let property = obj[item];
console.info('audio key is ' + item);
console.info('audio value is ' + property);
}
}
// Set the player callbacks.
function setCallBack(audioPlayer) {
audioPlayer.on('dataLoad', () => { // Set the 'dataLoad' event callback, which is triggered when the src attribute is set successfully.
console.info('audio set source success');
audioPlayer.play(); // The play() API can be invoked only after the 'dataLoad' event callback is complete. The 'play' event callback is then triggered.
});
audioPlayer.on('play', () => { // Set the 'play' event callback.
console.info('audio play success');
audioPlayer.pause(); // Trigger the 'pause' event callback and pause the playback.
});
audioPlayer.on('pause', () => { // Set the 'pause' event callback.
console.info('audio pause success');
audioPlayer.seek(5000); // Trigger the 'timeUpdate' event callback, and seek to 5000 ms for playback.
});
audioPlayer.on('stop', () => { // Set the 'stop' event callback.
console.info('audio stop success');
audioPlayer.reset(); // Trigger the 'reset' event callback, and reconfigure the src attribute to switch to the next song.
});
audioPlayer.on('reset', () => { // Set the 'reset' event callback.
console.info('audio reset success');
audioPlayer.release(); // Release the AudioPlayer instance.
audioPlayer = undefined;
});
audioPlayer.on('timeUpdate', (seekDoneTime) => { // Set the 'timeUpdate' event callback.
if (typeof(seekDoneTime) == 'undefined') {
console.info('audio seek fail');
return;
}
console.info('audio seek success, and seek time is ' + seekDoneTime);
audioPlayer.setVolume(0.5); // Trigger the 'volumeChange' event callback.
});
audioPlayer.on('volumeChange', () => { // Set the 'volumeChange' event callback.
console.info('audio volumeChange success');
audioPlayer.getTrackDescription((error, arrlist) => { // Obtain the audio track information in callback mode.
if (typeof (arrlist) != 'undefined') {
for (let i = 0; i < arrlist.length; i++) {
printfDescription(arrlist[i]);
}
} else {
console.log(`audio getTrackDescription fail, error:${error.message}`);
}
audioPlayer.stop(); // Trigger the 'stop' event callback to stop the playback.
});
});
audioPlayer.on('finish', () => { // Set the 'finish' event callback, which is triggered when the playback is complete.
console.info('audio play finish');
});
audioPlayer.on('error', (error) => { // Set the 'error' event callback.
console.info(`audio error called, errName is ${error.name}`);
console.info(`audio error called, errCode is ${error.code}`);
console.info(`audio error called, errMessage is ${error.message}`);
});
}
async function audioPlayerDemo() {
// 1. Create an AudioPlayer instance.
let audioPlayer = media.createAudioPlayer();
setCallBack(audioPlayer); // Set the event callbacks.
// 2. Set the URI of the audio file.
let fdPath = 'fd://'
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback.
}
```
### Normal Playback Scenario
```js
import media from '@ohos.multimedia.media'
import fs from '@ohos.file.fs'
export class AudioDemo {
// Set the player callbacks.
setCallBack(audioPlayer) {
audioPlayer.on('dataLoad', () => { // Set the 'dataLoad' event callback, which is triggered when the src attribute is set successfully.
console.info('audio set source success');
audioPlayer.play(); // Call the play() API to start the playback and trigger the 'play' event callback.
});
audioPlayer.on('play', () => { // Set the 'play' event callback.
console.info('audio play success');
});
audioPlayer.on('finish', () => { // Set the 'finish' event callback, which is triggered when the playback is complete.
console.info('audio play finish');
audioPlayer.release(); // Release the AudioPlayer instance.
audioPlayer = undefined;
});
}
async audioPlayerDemo() {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://'
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback.
}
}
```
### Switching to the Next Song
```js
import media from '@ohos.multimedia.media'
import fs from '@ohos.file.fs'
export class AudioDemo {
// Set the player callbacks.
private isNextMusic = false;
setCallBack(audioPlayer) {
audioPlayer.on('dataLoad', () => { // Set the 'dataLoad' event callback, which is triggered when the src attribute is set successfully.
console.info('audio set source success');
audioPlayer.play(); // Call the play() API to start the playback and trigger the 'play' event callback.
});
audioPlayer.on('play', () => { // Set the 'play' event callback.
console.info('audio play success');
audioPlayer.reset(); // Call the reset() API and trigger the 'reset' event callback.
});
audioPlayer.on('reset', () => { // Set the 'reset' event callback.
console.info('audio play success');
if (!this.isNextMusic) { // When isNextMusic is false, changing songs is implemented.
this.nextMusic(audioPlayer); // Changing songs is implemented.
} else {
audioPlayer.release(); // Release the AudioPlayer instance.
audioPlayer = undefined;
}
});
}
async nextMusic(audioPlayer) {
this.isNextMusic = true;
let nextFdPath = 'fd://'
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let nextpath = pathDir + '/02.mp3'
let nextFile = await fs.open(nextpath);
nextFdPath = nextFdPath + '' + nextFile.fd;
audioPlayer.src = nextFdPath; // Set the src attribute and trigger the 'dataLoad' event callback.
}
async audioPlayerDemo() {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://'
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback.
}
}
```
### Looping a Song
```js
import media from '@ohos.multimedia.media'
import fs from '@ohos.file.fs'
export class AudioDemo {
// Set the player callbacks.
setCallBack(audioPlayer) {
audioPlayer.on('dataLoad', () => { // Set the 'dataLoad' event callback, which is triggered when the src attribute is set successfully.
console.info('audio set source success');
audioPlayer.loop = true; // Set the loop playback attribute.
audioPlayer.play(); // Call the play() API to start the playback and trigger the 'play' event callback.
});
audioPlayer.on('play', () => { // Set the 'play' event callback to start loop playback.
console.info('audio play success');
});
}
async audioPlayerDemo() {
let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance.
this.setCallBack(audioPlayer); // Set the event callbacks.
let fdPath = 'fd://'
let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
// The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command.
let path = pathDir + '/01.mp3'
let file = await fs.open(path);
fdPath = fdPath + '' + file.fd;
audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback.
}
}
```
# Audio Recording Development
## Introduction
During audio recording, audio signals are captured, encoded, and saved to files. You can specify parameters such as the sampling rate, number of audio channels, encoding format, encapsulation format, and output file path for audio recording.
## Working Principles
The following figures show the audio recording state transition and the interaction with external modules for audio recording.
**Figure 1** Audio recording state transition
![en-us_image_audio_recorder_state_machine](figures/en-us_image_audio_recorder_state_machine.png)
**Figure 2** Interaction with external modules for audio recording
![en-us_image_audio_recorder_zero](figures/en-us_image_audio_recorder_zero.png)
**NOTE**: When a third-party recording application or the recorder calls the JS APIs provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework to obtain the audio data captured through the audio HDI. The framework layer then encodes the audio data through software, and saves the encoded and encapsulated audio data to a file to implement audio recording.
## Constraints
Before developing audio recording, configure the **ohos.permission.MICROPHONE** permission for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop
For details about the APIs, see [AudioRecorder in the Media API](../reference/apis/js-apis-media.md#audiorecorder).
### Full-Process Scenario
The full audio recording process includes creating an instance, setting recording parameters, starting, pausing, resuming, and stopping recording, and releasing resources.
```js
import media from '@ohos.multimedia.media'
import mediaLibrary from '@ohos.multimedia.mediaLibrary'
export class AudioRecorderDemo {
private testFdNumber; // Used to save the FD address.
// Set the callbacks related to audio recording.
setCallBack(audioRecorder) {
audioRecorder.on('prepare', () => { // Set the prepare event callback.
console.log('prepare success');
audioRecorder.start(); // Call the start API to start recording and trigger the start event callback.
});
audioRecorder.on('start', () => { // Set the start event callback.
console.log('audio recorder start success');
audioRecorder.pause(); // Call the pause API to pause recording and trigger the pause event callback.
});
audioRecorder.on('pause', () => { // Set the pause event callback.
console.log('audio recorder pause success');
audioRecorder.resume(); // Call the resume API to resume recording and trigger the resume event callback.
});
audioRecorder.on('resume', () => { // Set the resume event callback.
console.log('audio recorder resume success');
audioRecorder.stop(); // Call the stop API to stop recording and trigger the stop event callback.
});
audioRecorder.on('stop', () => { // Set the stop event callback.
console.log('audio recorder stop success');
audioRecorder.reset(); // Call the reset API to reset the recorder and trigger the reset event callback.
});
audioRecorder.on('reset', () => { // Set the reset event callback.
console.log('audio recorder reset success');
audioRecorder.release(); // Call the release API to release resources and trigger the release event callback.
});
audioRecorder.on('release', () => { // Set the release event callback.
console.log('audio recorder release success');
audioRecorder = undefined;
});
audioRecorder.on('error', (error) => { // Set the error event callback.
console.info(`audio error called, errName is ${error.name}`);
console.info(`audio error called, errCode is ${error.code}`);
console.info(`audio error called, errMessage is ${error.message}`);
});
}
// pathName indicates the passed recording file name, for example, 01.mp3. The generated file address is /storage/media/100/local/files/Video/01.mp3.
// To use the media library, declare the following permissions: ohos.permission.MEDIA_LOCATION, ohos.permission.WRITE_MEDIA, and ohos.permission.READ_MEDIA.
async getFd(pathName) {
let displayName = pathName;
const mediaTest = mediaLibrary.getMediaLibrary();
let fileKeyObj = mediaLibrary.FileKey;
let mediaType = mediaLibrary.MediaType.VIDEO;
let publicPath = await mediaTest.getPublicDirectory(mediaLibrary.DirectoryType.DIR_VIDEO);
let dataUri = await mediaTest.createAsset(mediaType, displayName, publicPath);
if (dataUri != undefined) {
let args = dataUri.id.toString();
let fetchOp = {
selections : fileKeyObj.ID + "=?",
selectionArgs : [args],
}
let fetchFileResult = await mediaTest.getFileAssets(fetchOp);
let fileAsset = await fetchFileResult.getAllObject();
let fdNumber = await fileAsset[0].open('Rw');
this.testFdNumber = "fd://" + fdNumber.toString();
}
}
async audioRecorderDemo() {
// 1. Create an AudioRecorder instance.
let audioRecorder = media.createAudioRecorder();
// 2. Set the callbacks.
this.setCallBack(audioRecorder);
await this.getFd('01.mp3'); // Call the getFd method to obtain the FD address of the file to be recorded.
// 3. Set the recording parameters.
let audioRecorderConfig = {
audioEncodeBitRate : 22050,
audioSampleRate : 22050,
numberOfChannels : 2,
uri : this.testFdNumber, // testFdNumber is generated by getFd.
location : { latitude : 30, longitude : 130},
audioEncoderMime : media.CodecMimeType.AUDIO_AAC,
fileFormat : media.ContainerFormatType.CFT_MPEG_4A,
}
audioRecorder.prepare(audioRecorderConfig); // Call the prepare method to trigger the prepare event callback.
}
}
```
### Normal Recording Scenario
Unlike the full-process scenario, the normal recording scenario does not include the process of pausing and resuming recording.
```js
import media from '@ohos.multimedia.media'
import mediaLibrary from '@ohos.multimedia.mediaLibrary'
export class AudioRecorderDemo {
private testFdNumber; // Used to save the FD address.
// Set the callbacks related to audio recording.
setCallBack(audioRecorder) {
audioRecorder.on('prepare', () => { // Set the prepare event callback.
console.log('prepare success');
audioRecorder.start(); // Call the start API to start recording and trigger the start event callback.
});
audioRecorder.on('start', () => { // Set the start event callback.
console.log('audio recorder start success');
audioRecorder.stop(); // Call the stop API to stop recording and trigger the stop event callback.
});
audioRecorder.on('stop', () => { // Set the stop event callback.
console.log('audio recorder stop success');
audioRecorder.release(); // Call the release API to release resources and trigger the release event callback.
});
audioRecorder.on('release', () => { // Set the release event callback.
console.log('audio recorder release success');
audioRecorder = undefined;
});
audioRecorder.on('error', (error) => { // Set the error event callback.
console.info(`audio error called, errName is ${error.name}`);
console.info(`audio error called, errCode is ${error.code}`);
console.info(`audio error called, errMessage is ${error.message}`);
});
}
// pathName indicates the passed recording file name, for example, 01.mp3. The generated file address is /storage/media/100/local/files/Video/01.mp3.
// To use the media library, declare the following permissions: ohos.permission.MEDIA_LOCATION, ohos.permission.WRITE_MEDIA, and ohos.permission.READ_MEDIA.
async getFd(pathName) {
let displayName = pathName;
const mediaTest = mediaLibrary.getMediaLibrary();
let fileKeyObj = mediaLibrary.FileKey;
let mediaType = mediaLibrary.MediaType.VIDEO;
let publicPath = await mediaTest.getPublicDirectory(mediaLibrary.DirectoryType.DIR_VIDEO);
let dataUri = await mediaTest.createAsset(mediaType, displayName, publicPath);
if (dataUri != undefined) {
let args = dataUri.id.toString();
let fetchOp = {
selections : fileKeyObj.ID + "=?",
selectionArgs : [args],
}
let fetchFileResult = await mediaTest.getFileAssets(fetchOp);
let fileAsset = await fetchFileResult.getAllObject();
let fdNumber = await fileAsset[0].open('Rw');
this.testFdNumber = "fd://" + fdNumber.toString();
}
}
async audioRecorderDemo() {
// 1. Create an AudioRecorder instance.
let audioRecorder = media.createAudioRecorder();
// 2. Set the callbacks.
this.setCallBack(audioRecorder);
await this.getFd('01.mp3'); // Call the getFd method to obtain the FD address of the file to be recorded.
// 3. Set the recording parameters.
let audioRecorderConfig = {
audioEncodeBitRate : 22050,
audioSampleRate : 22050,
numberOfChannels : 2,
uri : this.testFdNumber, // testFdNumber is generated by getFd.
location : { latitude : 30, longitude : 130},
audioEncoderMime : media.CodecMimeType.AUDIO_AAC,
fileFormat : media.ContainerFormatType.CFT_MPEG_4A,
}
audioRecorder.prepare(audioRecorderConfig); // Call the prepare method to trigger the prepare event callback.
}
}
```
# Audio Recording Development
## Selecting an Audio Recording Development Mode
OpenHarmony provides multiple classes for you to develop audio recording applications. You can select them based on the recording output formats, audio usage scenarios, and even the programming language you use. Selecting a suitable class helps reduce your development workload and enables your application to deliver a better effect.
- [AVRecorder](using-avrecorder-for-recording.md): provides ArkTS and JS APIs to implement audio and video recording. It also supports audio input, audio encoding, and media encapsulation. You can directly call device hardware, such as the microphone, for recording and generate M4A audio files.
- [AudioCapturer](using-audiocapturer-for-recording.md): provides ArkTS and JS APIs to implement audio input. It supports only the PCM format and requires applications to continuously read audio data. The application can process the audio data as it is captured. This class can be used to develop more professional and diverse recording applications. To use this class, you must have basic audio processing knowledge.
- [OpenSL ES](using-opensl-es-for-recording.md): provides a set of standard, cross-platform native audio APIs. It supports audio input in PCM format and is applicable to recording applications that are ported from other embedded platforms or that implement audio input at the native layer.
## Precautions for Developing Audio Recording Applications
The application must request the **ohos.permission.MICROPHONE** permission from the user before invoking the microphone to record audio.
For details about how to request the permission, see [Permission Application Guide](../security/accesstoken-guidelines.md). For details about how to use and manage microphones, see [Microphone Management](mic-management.md).
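The following is a minimal sketch of requesting the microphone permission at runtime, assuming the stage model where **context** is the UIAbility context of your application:
```ts
import abilityAccessCtrl from '@ohos.abilityAccessCtrl';

async function requestMicrophonePermission(context) {
  let atManager = abilityAccessCtrl.createAtManager();
  // Displays a dialog asking the user to grant the microphone permission.
  let result = await atManager.requestPermissionsFromUser(context, ['ohos.permission.MICROPHONE']);
  // In authResults, 0 means granted and -1 means denied.
  console.info(`Microphone permission results: ${JSON.stringify(result.authResults)}`);
}
```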
# Audio Recording Stream Management
An audio recording application must monitor audio stream state changes and perform corresponding operations. For example, when detecting that recording has stopped, the application must notify the user that the recording is finished.
## Reading or Listening for Audio Stream State Changes in the Application
Create an AudioCapturer by referring to [Using AudioCapturer for Audio Recording](using-audiocapturer-for-recording.md) or [audio.createAudioCapturer](../reference/apis/js-apis-audio.md#audiocreateaudiocapturer8). Then obtain the audio stream state changes in either of the following ways:
- Check the [state](../reference/apis/js-apis-audio.md#attributes) of the AudioCapturer.
```ts
let audioCapturerState = audioCapturer.state;
console.info(`Current state is: ${audioCapturerState}`);
```
- Register **stateChange** to listen for state changes of the AudioCapturer.
```ts
audioCapturer.on('stateChange', (capturerState) => {
console.info(`State change to: ${capturerState}`)
});
```
The application then performs an operation, for example, displays a message indicating the end of the recording, by comparing the obtained state with [AudioState](../reference/apis/js-apis-audio.md#audiostate8).
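For example, the following is a minimal sketch that notifies the user when recording stops. **showRecordingFinished** is a hypothetical UI helper of your application, not an API of the audio module.
```ts
// Hypothetical helper: display a message indicating that the recording is finished.
function showRecordingFinished() {
  console.info('Recording finished');
}

audioCapturer.on('stateChange', (capturerState) => {
  if (capturerState === audio.AudioState.STATE_STOPPED) {
    showRecordingFinished();
  }
});
```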
## Reading or Listening for Changes in All Audio Streams
If an application needs to obtain the change information about all audio streams, it can use **AudioStreamManager** to read or listen for the changes of all audio streams.
> **NOTE**
>
> The audio stream change information marked as the system API can be viewed only by system applications.
The figure below shows the call relationship of audio stream management.
![Call relationship of recording stream management](figures/invoking-relationship-recording-stream-mgmt.png)
During application development, first use **getStreamManager()** to create an **AudioStreamManager** instance. Then call **on('audioCapturerChange')** to listen for audio stream changes and obtain a notification when the audio stream state or device changes. To cancel the listening for these changes, call **off('audioCapturerChange')**. You can call **getCurrentAudioCapturerInfoArray()** to obtain information such as the unique ID of the recording stream, UID of the recording stream client, and stream status.
For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9).
## How to Develop
1. Create an **AudioStreamManager** instance.
Before using **AudioStreamManager** APIs, you must use **getStreamManager()** to create an **AudioStreamManager** instance.
```ts
import audio from '@ohos.multimedia.audio';
let audioManager = audio.getAudioManager();
let audioStreamManager = audioManager.getStreamManager();
```
2. Use **on('audioCapturerChange')** to listen for audio recording stream changes. If the application needs to receive a notification when the audio recording stream state or device changes, it can subscribe to this event.
```ts
audioStreamManager.on('audioCapturerChange', (AudioCapturerChangeInfoArray) => {
for (let i = 0; i < AudioCapturerChangeInfoArray.length; i++) {
console.info(`## CapChange on is called for element ${i} ##`);
console.info(`StreamId for ${i} is: ${AudioCapturerChangeInfoArray[i].streamId}`);
console.info(`Source for ${i} is: ${AudioCapturerChangeInfoArray[i].capturerInfo.source}`);
console.info(`Flag ${i} is: ${AudioCapturerChangeInfoArray[i].capturerInfo.capturerFlags}`);
let devDescriptor = AudioCapturerChangeInfoArray[i].deviceDescriptors;
for (let j = 0; j < AudioCapturerChangeInfoArray[i].deviceDescriptors.length; j++) {
console.info(`Id: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].id}`);
console.info(`Type: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceType}`);
console.info(`Role: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceRole}`);
console.info(`Name: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].name}`);
console.info(`Address: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].address}`);
console.info(`SampleRates: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].sampleRates[0]}`);
console.info(`ChannelCounts ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelCounts[0]}`);
console.info(`ChannelMask: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelMasks}`);
}
}
});
```
3. (Optional) Use **off('audioCapturerChange')** to cancel listening for audio recording stream changes.
```ts
audioStreamManager.off('audioCapturerChange');
console.info('CapturerChange Off is called');
```
4. (Optional) Call **getCurrentAudioCapturerInfoArray()** to obtain information about the current audio recording stream.
This API can be used to obtain the unique ID of the audio recording stream, UID of the audio recording client, audio status, and other information about the AudioCapturer.
> **NOTE**
>
> Before listening for state changes of all audio streams, the application must request the **ohos.permission.USE_BLUETOOTH** [permission](../security/accesstoken-guidelines.md) so that the device name and device address (Bluetooth-related attributes) can be displayed correctly.
```ts
async function getCurrentAudioCapturerInfoArray(){
await audioStreamManager.getCurrentAudioCapturerInfoArray().then( function (AudioCapturerChangeInfoArray) {
console.info('getCurrentAudioCapturerInfoArray Get Promise Called ');
if (AudioCapturerChangeInfoArray != null) {
for (let i = 0; i < AudioCapturerChangeInfoArray.length; i++) {
console.info(`StreamId for ${i} is: ${AudioCapturerChangeInfoArray[i].streamId}`);
console.info(`Source for ${i} is: ${AudioCapturerChangeInfoArray[i].capturerInfo.source}`);
console.info(`Flag ${i} is: ${AudioCapturerChangeInfoArray[i].capturerInfo.capturerFlags}`);
for (let j = 0; j < AudioCapturerChangeInfoArray[i].deviceDescriptors.length; j++) {
console.info(`Id: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].id}`);
console.info(`Type: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceType}`);
console.info(`Role: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceRole}`);
console.info(`Name: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].name}`);
console.info(`Address: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].address}`);
console.info(`SampleRates: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].sampleRates[0]}`);
console.info(`ChannelCounts ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelCounts[0]}`);
console.info(`ChannelMask: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelMasks}`);
}
}
}
}).catch((err) => {
console.error(`Invoke getCurrentAudioCapturerInfoArray failed, code is ${err.code}, message is ${err.message}`);
});
}
```
# Audio Routing and Device Management Development
## Overview
The **AudioRoutingManager** module provides APIs for audio routing and device management. You can use the APIs to obtain the current input and output audio devices, listen for connection status changes of audio devices, and activate communication devices.
## Working Principles
The figure below shows the common APIs provided by the **AudioRoutingManager** module.
**Figure 1** Common APIs of AudioRoutingManager
![en-us_image_audio_routing_manager](figures/en-us_image_audio_routing_manager.png)
You can use these APIs to obtain the device list, subscribe to or unsubscribe from device connection status changes, activate communication devices, and obtain their activation status. For details, see [Audio Management](../reference/apis/js-apis-audio.md).
## How to Develop
For details about the APIs, see [AudioRoutingManager in Audio Management](../reference/apis/js-apis-audio.md#audioroutingmanager9).
1. Obtain an **AudioRoutingManager** instance.
Before using an API in **AudioRoutingManager**, you must use **getRoutingManager()** to obtain an **AudioRoutingManager** instance.
```js
import audio from '@ohos.multimedia.audio';
async loadAudioRoutingManager() {
let audioRoutingManager = await audio.getAudioManager().getRoutingManager();
console.info('audioRoutingManager------create-------success.');
}
```
2. (Optional) Obtain the device list and subscribe to device connection status changes.
To obtain the device list (such as input, output, distributed input, and distributed output devices) or listen for connection status changes of audio devices, refer to the following code:
```js
import audio from '@ohos.multimedia.audio';
// Obtain an AudioRoutingManager instance and save it for reuse by the functions below.
let audioRoutingManager;
async loadAudioRoutingManager() {
  audioRoutingManager = await audio.getAudioManager().getRoutingManager();
  console.info('audioRoutingManager------create-------success.');
}
// Obtain information about all audio devices. (You can set DeviceFlag as required.)
async getDevices() {
await loadAudioRoutingManager();
await audioRoutingManager.getDevices(audio.DeviceFlag.ALL_DEVICES_FLAG).then((data) => {
console.info(`getDevices success and data is: ${JSON.stringify(data)}.`);
});
}
// Subscribe to connection status changes of audio devices.
async onDeviceChange() {
await loadAudioRoutingManager();
await audioRoutingManager.on('deviceChange', audio.DeviceFlag.ALL_DEVICES_FLAG, (deviceChanged) => {
console.info('on device change type : ' + deviceChanged.type);
console.info('on device descriptor size : ' + deviceChanged.deviceDescriptors.length);
console.info('on device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceRole);
console.info('on device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceType);
});
}
// Unsubscribe from the connection status changes of audio devices.
async offDeviceChange() {
await loadAudioRoutingManager();
await audioRoutingManager.off('deviceChange', (deviceChanged) => {
console.info('off device change type : ' + deviceChanged.type);
console.info('off device descriptor size : ' + deviceChanged.deviceDescriptors.length);
console.info('off device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceRole);
console.info('off device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceType);
});
}
// Complete process: Call APIs to obtain all devices and subscribe to device changes, then manually change the connection status of a device (for example, wired headset), and finally call APIs to obtain all devices and unsubscribe from the device changes.
async test(){
await getDevices();
await onDeviceChange();
// Manually disconnect or connect devices.
await getDevices();
await offDeviceChange();
}
```
3. (Optional) Activate a communication device and obtain its activation status.
```js
import audio from '@ohos.multimedia.audio';
// Obtain an AudioRoutingManager instance and save it for reuse by the functions below.
let audioRoutingManager;
async loadAudioRoutingManager() {
  audioRoutingManager = await audio.getAudioManager().getRoutingManager();
  console.info('audioRoutingManager------create-------success.');
}
// Activate a communication device.
async setCommunicationDevice() {
await loadAudioRoutingManager();
await audioRoutingManager.setCommunicationDevice(audio.CommunicationDeviceType.SPEAKER, true).then(() => {
console.info('setCommunicationDevice true is success.');
});
}
// Obtain the activation status of the communication device.
async isCommunicationDeviceActive() {
await loadAudioRoutingManager();
await audioRoutingManager.isCommunicationDeviceActive(audio.CommunicationDeviceType.SPEAKER).then((value) => {
console.info(`CommunicationDevice state is: ${value}.`);
});
}
// Complete process: Activate a device and obtain the activation status.
async test(){
await setCommunicationDevice();
await isCommunicationDeviceActive();
}
```
# Audio Stream Management Development
## Introduction
You can use **AudioStreamManager** to manage audio streams.
## Working Principles
The following figure shows the calling relationship of **AudioStreamManager** APIs.
**Figure 1** AudioStreamManager API calling relationship
![en-us_image_audio_stream_manager](figures/en-us_image_audio_stream_manager.png)
**NOTE**: During application development, use **getStreamManager()** to create an **AudioStreamManager** instance. Then, you can call **on('audioRendererChange')** or **on('audioCapturerChange')** to listen for status, client, and audio attribute changes of the audio playback or recording application. To cancel the listening for these changes, call **off('audioRendererChange')** or **off('audioCapturerChange')**. You can call **getCurrentAudioRendererInfoArray()** to obtain information about the audio playback application, such as the unique audio stream ID, UID of the audio playback client, and audio status. Similarly, you can call **getCurrentAudioCapturerInfoArray()** to obtain information about the audio recording application.
## How to Develop
For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9).
1. Create an **AudioStreamManager** instance.
Before using **AudioStreamManager** APIs, you must use **getStreamManager()** to create an **AudioStreamManager** instance.
```js
import audio from '@ohos.multimedia.audio';

var audioManager = audio.getAudioManager();
var audioStreamManager = audioManager.getStreamManager();
```
2. (Optional) Call **on('audioRendererChange')** to listen for audio renderer changes.
If an application needs to receive notifications when the audio playback application status, audio playback client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).
```js
audioStreamManager.on('audioRendererChange', (AudioRendererChangeInfoArray) => {
for (let i = 0; i < AudioRendererChangeInfoArray.length; i++) {
let AudioRendererChangeInfo = AudioRendererChangeInfoArray[i];
console.info('## RendererChange on is called for ' + i + ' ##');
console.info('StreamId for ' + i + ' is:' + AudioRendererChangeInfo.streamId);
console.info('ClientUid for ' + i + ' is:' + AudioRendererChangeInfo.clientUid);
console.info('Content for ' + i + ' is:' + AudioRendererChangeInfo.rendererInfo.content);
console.info('Stream for ' + i + ' is:' + AudioRendererChangeInfo.rendererInfo.usage);
console.info('Flag ' + i + ' is:' + AudioRendererChangeInfo.rendererInfo.rendererFlags);
console.info('State for ' + i + ' is:' + AudioRendererChangeInfo.rendererState);
var devDescriptor = AudioRendererChangeInfo.deviceDescriptors;
for (let j = 0; j < AudioRendererChangeInfo.deviceDescriptors.length; j++) {
console.info('Id:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].id);
console.info('Type:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].deviceType);
console.info('Role:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].deviceRole);
console.info('Name:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].name);
console.info('Address:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].address);
console.info('SampleRates:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].sampleRates[0]);
console.info('ChannelCounts' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].channelCounts[0]);
console.info('ChannelMask:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].channelMasks);
}
}
});
```
3. (Optional) Call **off('audioRendererChange')** to cancel listening for audio renderer changes.
```js
audioStreamManager.off('audioRendererChange');
console.info('######### RendererChange Off is called #########');
```
4. (Optional) Call **on('audioCapturerChange')** to listen for audio capturer changes.
If an application needs to receive notifications when the audio recording application status, audio recording client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).
```js
audioStreamManager.on('audioCapturerChange', (AudioCapturerChangeInfoArray) => {
for (let i = 0; i < AudioCapturerChangeInfoArray.length; i++) {
console.info(' ## audioCapturerChange on is called for element ' + i + ' ##');
console.info('StreamId for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].streamId);
console.info('ClientUid for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].clientUid);
console.info('Source for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerInfo.source);
console.info('Flag ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerInfo.capturerFlags);
console.info('State for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerState);
for (let j = 0; j < AudioCapturerChangeInfoArray[i].deviceDescriptors.length; j++) {
console.info('Id:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].id);
console.info('Type:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceType);
console.info('Role:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceRole);
console.info('Name:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].name);
console.info('Address:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].address);
console.info('SampleRates:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].sampleRates[0]);
console.info('ChannelCounts' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelCounts[0]);
console.info('ChannelMask:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelMasks);
}
}
});
```
5. (Optional) Call **off('audioCapturerChange')** to cancel listening for audio capturer changes.
```js
audioStreamManager.off('audioCapturerChange');
console.info('######### CapturerChange Off is called #########');
```
6. (Optional) Call **getCurrentAudioRendererInfoArray()** to obtain information about the current audio renderer.
This API can be used to obtain the unique ID of the audio stream, UID of the audio playback client, audio status, and other information about the audio player. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured so that the device name and device address can be displayed correctly.
```js
await audioStreamManager.getCurrentAudioRendererInfoArray().then( function (AudioRendererChangeInfoArray) {
console.info('######### Get Promise is called ##########');
if (AudioRendererChangeInfoArray != null) {
for (let i = 0; i < AudioRendererChangeInfoArray.length; i++) {
let AudioRendererChangeInfo = AudioRendererChangeInfoArray[i];
console.info('StreamId for ' + i +' is:' + AudioRendererChangeInfo.streamId);
console.info('ClientUid for ' + i + ' is:' + AudioRendererChangeInfo.clientUid);
console.info('Content ' + i + ' is:' + AudioRendererChangeInfo.rendererInfo.content);
console.info('Stream' + i +' is:' + AudioRendererChangeInfo.rendererInfo.usage);
console.info('Flag' + i + ' is:' + AudioRendererChangeInfo.rendererInfo.rendererFlags);
console.info('State for ' + i + ' is:' + AudioRendererChangeInfo.rendererState);
var devDescriptor = AudioRendererChangeInfo.deviceDescriptors;
for (let j = 0; j < AudioRendererChangeInfo.deviceDescriptors.length; j++) {
console.info('Id:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].id);
console.info('Type:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].deviceType);
console.info('Role:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].deviceRole);
console.info('Name:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].name);
console.info('Address:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].address);
console.info('SampleRates:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].sampleRates[0]);
console.info('ChannelCounts' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].channelCounts[0]);
console.info('ChannelMask:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].channelMasks);
}
}
}
}).catch((err) => {
console.log('getCurrentAudioRendererInfoArray :ERROR: ' + err.message);
});
```
7. (Optional) Call **getCurrentAudioCapturerInfoArray()** to obtain information about the current audio capturer.
This API can be used to obtain the unique ID of the audio stream, UID of the audio recording client, audio status, and other information about the audio capturer. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured so that the device name and device address can be displayed correctly.
```js
await audioStreamManager.getCurrentAudioCapturerInfoArray().then( function (AudioCapturerChangeInfoArray) {
console.info('getCurrentAudioCapturerInfoArray: **** Get Promise Called ****');
if (AudioCapturerChangeInfoArray != null) {
for (let i = 0; i < AudioCapturerChangeInfoArray.length; i++) {
console.info('StreamId for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].streamId);
console.info('ClientUid for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].clientUid);
console.info('Source for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerInfo.source);
console.info('Flag ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerInfo.capturerFlags);
console.info('State for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerState);
var devDescriptor = AudioCapturerChangeInfoArray[i].deviceDescriptors;
for (let j = 0; j < AudioCapturerChangeInfoArray[i].deviceDescriptors.length; j++) {
console.info('Id:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].id);
console.info('Type:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceType);
console.info('Role:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceRole);
console.info('Name:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].name)
console.info('Address:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].address);
console.info('SampleRates:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].sampleRates[0]);
console.info('ChannelCounts' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelCounts[0]);
console.info('ChannelMask:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelMasks);
}
}
}
}).catch((err) => {
console.log('getCurrentAudioCapturerInfoArray :ERROR: ' + err.message);
});
```
# Volume Management Development
## Overview
The **AudioVolumeManager** module provides APIs for volume management. You can use the APIs to obtain the volume of a stream, listen for ringer mode changes, and mute a microphone.
## Working Principles
The figure below shows the common APIs provided by the **AudioVolumeManager** module.
**Figure 1** Common APIs of AudioVolumeManager
![en-us_image_audio_volume_manager](figures/en-us_image_audio_volume_manager.png)
**AudioVolumeManager** provides the APIs for subscribing to system volume changes and obtaining the audio volume group manager (an **AudioVolumeGroupManager** instance). Before calling any API in **AudioVolumeGroupManager**, you must call **getVolumeGroupManager** to obtain an **AudioVolumeGroupManager** instance. You can use the APIs provided by **AudioVolumeGroupManager** to obtain the volume of a stream, mute a microphone, and listen for microphone state changes. For details, see [Audio Management](../reference/apis/js-apis-audio.md).
## Constraints
Before developing a microphone management application, configure the permission **ohos.permission.MICROPHONE** for the application. To set the microphone state, configure the permission **ohos.permission.MANAGE_AUDIO_CONFIG** (a system permission). For details about the permission configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
## How to Develop
For details about the APIs, see [AudioVolumeManager in Audio Management](../reference/apis/js-apis-audio.md#audiovolumemanager9)
1. Obtain an **AudioVolumeGroupManager** instance.
Before using an API in **AudioVolumeGroupManager**, you must use **getVolumeGroupManager()** to obtain an **AudioVolumeGroupManager** instance.
```js
import audio from '@ohos.multimedia.audio';
async loadVolumeGroupManager() {
const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
let audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
console.info('audioVolumeGroupManager create success.');
}
```
2. (Optional) Obtain the volume information and ringer mode.
To obtain the volume information (such as the ringtone, voice call, media, and voice assistant) of an audio stream or obtain the ringer mode (silent, vibration, or normal) of the current device, refer to the code below. For more details, see [Audio Management](../reference/apis/js-apis-audio.md).
```js
import audio from '@ohos.multimedia.audio';
// Obtain an AudioVolumeGroupManager instance and save it for reuse by the functions below.
let audioVolumeGroupManager;
async loadVolumeGroupManager() {
  const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
  audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
  console.info('audioVolumeGroupManager create success.');
}
// Obtain the volume of a stream. The value ranges from 0 to 15.
async getVolume() {
await loadVolumeGroupManager();
await audioVolumeGroupManager.getVolume(audio.AudioVolumeType.MEDIA).then((value) => {
console.info(`getVolume success and volume is: ${value}.`);
});
}
// Obtain the minimum volume of a stream.
async getMinVolume() {
await loadVolumeGroupManager();
await audioVolumeGroupManager.getMinVolume(audio.AudioVolumeType.MEDIA).then((value) => {
console.info(`getMinVolume success and volume is: ${value}.`);
});
}
// Obtain the maximum volume of a stream.
async getMaxVolume() {
await loadVolumeGroupManager();
await audioVolumeGroupManager.getMaxVolume(audio.AudioVolumeType.MEDIA).then((value) => {
console.info(`getMaxVolume success and volume is: ${value}.`);
});
}
// Obtain the ringer mode in use: silent (0) | vibrate (1) | normal (2).
async getRingerMode() {
await loadVolumeGroupManager();
await audioVolumeGroupManager.getRingerMode().then((value) => {
console.info(`getRingerMode success and RingerMode is: ${value}.`);
});
}
```
3. (Optional) Obtain and set the microphone state, and subscribe to microphone state changes.
To obtain and set the microphone state or subscribe to microphone state changes, refer to the following code:
```js
import audio from '@ohos.multimedia.audio';
// Obtain an AudioVolumeGroupManager instance and save it for reuse by the functions below.
let audioVolumeGroupManager;
async loadVolumeGroupManager() {
  const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
  audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
  console.info('audioVolumeGroupManager create success.');
}
async on() { // Subscribe to microphone state changes.
await loadVolumeGroupManager();
audioVolumeGroupManager.on('micStateChange', (micStateChange) => {
console.info(`Current microphone status is: ${micStateChange.mute} `);
});
}
async isMicrophoneMute() { // Check whether the microphone is muted.
  await loadVolumeGroupManager();
  await audioVolumeGroupManager.isMicrophoneMute().then((value) => {
console.info(`isMicrophoneMute is: ${value}.`);
});
}
async setMicrophoneMuteTrue() { // Mute the microphone.
await loadVolumeGroupManager();
await audioVolumeGroupManager.setMicrophoneMute(true).then(() => {
console.info('setMicrophoneMute to mute.');
});
}
async setMicrophoneMuteFalse() { // Unmute the microphone.
await loadVolumeGroupManager();
await audioVolumeGroupManager.setMicrophoneMute(false).then(() => {
console.info('setMicrophoneMute to not mute.');
});
}
async test(){ // Complete process: Subscribe to microphone state changes, obtain the microphone state, mute the microphone, obtain the microphone state, and unmute the microphone.
await on();
await isMicrophoneMute();
await setMicrophoneMuteTrue();
await isMicrophoneMute();
await setMicrophoneMuteFalse();
}
```
# Audio and Video Overview
You will learn how to use the audio and video APIs provided by the multimedia subsystem to develop a wide range of audio and video playback and recording scenarios. For example, you can use the **TonePlayer** class to implement a simple prompt tone so that a short beep is played when a new message arrives, or use the **AVPlayer** class to develop a music player that can loop a piece of music.
For every functionality provided by the multimedia subsystem, you will learn multiple implementation modes, each of which corresponds to a specific usage scenario. You will also learn the sub-functionalities in these scenarios. For example, in the **Audio Playback** chapter, you will learn audio concurrency policies, volume management, and output device processing methods. All these will help you develop an application with more comprehensive features.
This development guide applies only to audio and video playback and recording, which are implemented by the [@ohos.multimedia.audio](../reference/apis/js-apis-audio.md) and [@ohos.multimedia.media](../reference/apis/js-apis-media.md) modules. The UI, image processing, media storage, or other related capabilities are not covered.
## Development Description
Before developing an audio feature, especially before implementing audio data processing, you are advised to understand the following acoustic concepts. This will help you understand how the OpenHarmony APIs control the audio module and how to develop audio and video applications that are easier to use and deliver better experience.
- Audio quantization process: sampling > quantization > encoding
- Concepts related to audio quantization: analog signal, digital signal, sampling rate, audio channel, sample format, bit width, bit rate, common encoding formats (such as AAC, MP3, PCM, and WMA), and common encapsulation formats (such as WAV, MPA, FLAC, AAC, and OGG); see the worked example after this list
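As a worked example of how these quantities relate, the raw bit rate of a PCM stream is the product of the sampling rate, bit width, and number of audio channels. The sketch below computes it for CD-quality stereo audio:
```ts
// Raw PCM bit rate = sampling rate x bit width x number of channels.
const sampleRateHz = 44100; // CD-quality sampling rate
const bitWidth = 16;        // 16-bit samples (S16LE)
const channels = 2;         // stereo
const bitRate = sampleRateHz * bitWidth * channels; // 1411200 bit/s, about 1411.2 kbit/s
console.info(`Raw PCM bit rate: ${bitRate} bit/s`);
```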
Before developing features related to audio and video playback, you are advised to understand the following concepts:
- Playback process: network protocol > container format > audio and video codec > graphics/audio rendering
- Network protocols: HLS, HTTP, HTTPS, and more
- Container formats: MP4, MKV, MPEG-TS, WebM, and more
- Encoding formats: H.263/H.264/H.265, MPEG4/MPEG2, and more
## Introduction to Audio Streams
An audio stream is an independent audio data processing unit that has a specific audio format and audio usage scenario information. The audio stream can be used in playback and recording scenarios, and supports independent volume adjustment and audio device routing.
The basic audio stream information is defined by [AudioStreamInfo](../reference/apis/js-apis-audio.md#audiostreaminfo8), which includes the sampling, audio channel, bit width, and encoding information. It describes the basic attributes of audio data and is mandatory for creating an audio playback or recording stream. To enable the audio module to correctly process audio data, the configured basic information must match the transmitted audio data.
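For example, a minimal sketch of the basic stream information for CD-quality stereo PCM, using the enum values defined in [AudioStreamInfo](../reference/apis/js-apis-audio.md#audiostreaminfo8):
```ts
import audio from '@ohos.multimedia.audio';

let audioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
  channels: audio.AudioChannel.CHANNEL_2,
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
};
```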
### Audio Stream Usage Scenario Information
In addition to the basic information (which describes only audio data), an audio stream has usage scenario information. This is because audio streams differ in the volume, device routing, and concurrency policy. The system chooses an appropriate processing policy for an audio stream based on the usage scenario information, thereby delivering the optimal user experience.
- Playback scenario
Information about the audio playback scenario is defined by using [StreamUsage](../reference/apis/js-apis-audio.md#streamusage) and [ContentType](../reference/apis/js-apis-audio.md#contenttype).
- **StreamUsage** specifies the usage type of an audio stream, for example, used for media, voice communication, voice assistant, notification, and ringtone.
- **ContentType** specifies the content type of data in an audio stream, for example, speech, music, movie, notification tone, and ringtone.
- Recording scenario
Information about the audio stream recording scenario is defined by [SourceType](../reference/apis/js-apis-audio.md#sourcetype8).
**SourceType** specifies the recording source type of an audio stream, including the mic source, voice recognition source, and voice communication source.
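As a sketch, the scenario information for a music playback stream and a microphone recording stream might be declared as follows (the field values are illustrative):

```ts
import audio from '@ohos.multimedia.audio';

// Playback scenario: a music stream used for media.
let audioRendererInfo: audio.AudioRendererInfo = {
  content: audio.ContentType.CONTENT_TYPE_MUSIC, // content type: music
  usage: audio.StreamUsage.STREAM_USAGE_MEDIA,   // usage type: media
  rendererFlags: 0                               // 0 is the default renderer flag
};

// Recording scenario: the microphone as the recording source.
let audioCapturerInfo: audio.AudioCapturerInfo = {
  source: audio.SourceType.SOURCE_TYPE_MIC,
  capturerFlags: 0
};
```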
## Supported Audio Formats
The audio module APIs, including **AudioRenderer**, **AudioCapturer**, **TonePlayer**, and OpenSL ES, support audio data in PCM encoding.
Be familiar with the following about the audio format:
- The common audio sampling rates are supported: 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000, 64000, and 96000, in units of Hz. For details, see [AudioSamplingRate](../reference/apis/js-apis-audio.md#audiosamplingrate8).
The sampling rate varies according to the device type.
- Mono and stereo are supported. For details, see [AudioChannel](../reference/apis/js-apis-audio.md#audiochannel8).
- The following sampling formats are supported: U8 (unsigned 8-bit integer), S16LE (signed 16-bit integer, little endian), S24LE (signed 24-bit integer, little endian), S32LE (signed 32-bit integer, little endian), and F32LE (signed 32-bit floating point number, little endian). For details, see [AudioSampleFormat](../reference/apis/js-apis-audio.md#audiosampleformat8).
Due to system restrictions, only some devices support the sampling formats S24LE, S32LE, and F32LE.
Little endian means that the least significant byte of the data is stored at the smallest memory address and the most significant byte at the largest. This storage mode aligns the memory address with the bit weight of the data: the higher the address, the higher the weight of the byte stored there.
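The following plain TypeScript snippet illustrates the byte order (a demonstration with **DataView**, not an OpenHarmony API):

```ts
// Store 0x12345678 in little-endian order and inspect the bytes.
let buffer = new ArrayBuffer(4);
new DataView(buffer).setUint32(0, 0x12345678, true); // true = little endian
let bytes = new Uint8Array(buffer);
// Lowest address holds the least significant byte: 78 56 34 12
console.info(Array.from(bytes).map(b => b.toString(16)).join(' '));
```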
The audio and video formats supported by the APIs of the media module are described in [AVPlayer and AVRecorder](avplayer-avrecorder-overview.md).
# AVPlayer and AVRecorder
The media module provides the [AVPlayer](#avplayer) and [AVRecorder](#avrecorder) classes to implement audio and video playback and recording.
## AVPlayer
The AVPlayer decodes audio and video media assets (such as MP4, MP3, MKV, and MPEG-TS) into renderable images and audible audio signals, and plays the audio and video through output devices.
The AVPlayer provides the integrated playback capability. This means that your application only needs to provide streaming media sources to implement media playback. It does not need to parse or decode data.
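For example, in a minimal playback sketch the application only sets a media source and drives the AVPlayer state machine; parsing and decoding happen inside the player framework. The URL below is a hypothetical placeholder:

```ts
import media from '@ohos.multimedia.media';

async function playAudio() {
  let avPlayer = await media.createAVPlayer();
  avPlayer.on('stateChange', async (state) => {
    if (state === 'initialized') {
      await avPlayer.prepare(); // The framework parses and decodes the asset.
    } else if (state === 'prepared') {
      await avPlayer.play();
    }
  });
  // Setting the source moves the player from 'idle' to 'initialized'.
  avPlayer.url = 'https://example.com/sample.mp3'; // Hypothetical source.
}
```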
### Audio Playback
The figure below shows the interaction when the **AVPlayer** class is used to develop a music application.
**Figure 1** Interaction with external modules for audio playback
![Audio playback interaction diagram](figures/audio-playback-interaction-diagram.png)
When a music application calls the **AVPlayer** APIs at the JS interface layer to implement audio playback, the player framework at the framework layer parses the media asset into audio data streams (in PCM format). The audio data streams are then decoded by software and output to the audio framework. The audio framework outputs the audio data streams to the audio HDI for rendering. A complete audio playback process requires the cooperation of the application, player framework, audio framework, and audio HDI.
In Figure 1, the numbers indicate the process where data is transferred to external modules.
1. The music application transfers the media asset to the **AVPlayer** instance.
2. The player framework outputs the audio PCM data streams to the audio framework, which then outputs the data streams to the audio HDI.
### Video Playback
The figure below shows the interaction when the **AVPlayer** class is used to develop a video application.
**Figure 2** Interaction with external modules for video playback
![Video playback interaction diagram](figures/video-playback-interaction-diagram.png)
When the video application calls the **AVPlayer** APIs at the JS interface layer to implement audio and video playback, the player framework at the framework layer parses the media asset into separate audio data streams and video data streams. The audio data streams are then decoded by software and output to the audio framework. The audio framework outputs the audio data streams to the audio HDI at the hardware interface layer to implement audio playback. The video data streams are then decoded by hardware (recommended) or software and output to the graphic framework. The graphic framework outputs the video data streams to the display HDI at the hardware interface layer to implement graphics rendering.
A complete video playback process requires the cooperation of the application, XComponent, player framework, graphic framework, audio framework, display HDI, and audio HDI.
In Figure 2, the numbers indicate the process where data is transferred to external modules.
1. The application obtains a window surface ID from the XComponent. For details about how to obtain the window surface ID, see [XComponent](../reference/arkui-ts/ts-basic-components-xcomponent.md).
2. The application transfers the media asset and surface ID to the **AVPlayer** instance.
3. The player framework outputs the video elementary streams (ESs) to the decoding HDI to obtain video frames (NV12/NV21/RGBA).
4. The player framework outputs the audio PCM data streams to the audio framework, which then outputs the data streams to the audio HDI.
5. The player framework outputs the video frames (NV12/NV21/RGBA) to the graphic framework, which then outputs the video frames to the display HDI.
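In code, step 2 corresponds to binding the surface ID to the player before preparation. A minimal sketch, assuming **surfaceId** was obtained from the XComponent as in step 1 and using a hypothetical asset URL:

```ts
import media from '@ohos.multimedia.media';

async function playVideo(surfaceId: string) {
  let avPlayer = await media.createAVPlayer();
  avPlayer.on('stateChange', async (state) => {
    if (state === 'initialized') {
      avPlayer.surfaceId = surfaceId; // Video frames are rendered to this surface.
      await avPlayer.prepare();
    } else if (state === 'prepared') {
      await avPlayer.play();
    }
  });
  avPlayer.url = 'https://example.com/sample.mp4'; // Hypothetical source.
}
```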
### Supported Formats and Protocols
Audio and video containers and codecs are the domain of content creators. You are advised to use mainstream playback formats rather than custom ones, to avoid playback failures, frame freezing, and artifacts. Playing an incompatible format does not affect the system; if such an issue occurs, you can simply exit playback.
The table below lists the supported protocols.
| Scenario| Description|
| -------- | -------- |
| Local VOD| The file descriptor is supported, but the file path is not.|
| Network VOD| HTTP, HTTPS, and HLS are supported.|
The table below lists the supported audio playback formats.
| Audio Container Format| Description|
| -------- | -------- |
| M4A| Audio format: AAC|
| AAC| Audio format: AAC|
| MP3| Audio format: MP3|
| OGG| Audio format: VORBIS |
| WAV| Audio format: PCM |
> **NOTE**
>
> The supported video formats are further classified into mandatory and optional ones. All vendors must support mandatory ones and can determine whether to implement optional ones based on their service requirements. You are advised to perform compatibility processing to ensure that all the application functions are compatible on different platforms.
| Video Format| Mandatory or Not|
| -------- | -------- |
| H.264 | Yes|
| MPEG-2 | No|
| MPEG-4 | No|
| H.263 | No|
| VP8 | No|
The table below lists the supported playback formats and mainstream resolutions.
| Video Container Format| Description| Resolution|
| -------- | -------- | -------- |
| MP4| Video formats: H.264, MPEG-2, MPEG-4, and H.263<br>Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
| MKV| Video formats: H.264, MPEG-2, MPEG-4, and H.263<br>Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
| TS| Video formats: H.264, MPEG-2, and MPEG-4<br>Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
| WebM| Video format: VP8<br>Audio format: VORBIS| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
## AVRecorder
The AVRecorder captures audio signals, receives video signals, encodes the audio and video signals, and saves them to files. With the AVRecorder, you can easily implement audio and video recording, including starting, pausing, resuming, and stopping recording, and releasing resources. You can also specify parameters such as the encoding format, encapsulation format, and file path for recording.
**Figure 3** Interaction with external modules for video recording
![Video recording interaction diagram](figures/video-recording-interaction-diagram.png)
- Audio recording: When an application calls the **AVRecorder** APIs at the JS interface layer to implement audio recording, the player framework at the framework layer invokes the audio framework to capture audio data through the audio HDI. The audio data is then encoded by software and saved into a file.
- Video recording: When an application calls the **AVRecorder** APIs at the JS interface layer to implement video recording, the camera framework is first invoked to capture image data, which is sent to the player framework at the framework layer through the surface. The player framework encodes the image data through the video HDI and saves the encoded data into a file.
With the AVRecorder, you can implement pure audio recording, pure video recording, and audio and video recording.
In Figure 3, the numbers indicate the process where data is transferred to external modules.
1. The application obtains a surface ID from the player framework through the **AVRecorder** instance.
2. The application sets the surface ID for the camera framework, which obtains the surface corresponding to the surface ID. The camera framework captures image data through the video HDI and sends the data to the player framework at the framework layer.
3. The camera framework transfers the video data to the player framework through the surface.
4. The player framework encodes video data through the video HDI.
5. The player framework sets the audio parameters for the audio framework and obtains the audio data from the audio framework.
### Supported Formats
The table below lists the supported audio sources.
| Type| Description|
| -------- | -------- |
| mic | The system microphone is used as the audio source input.|
The table below lists the supported video sources.
| Type| Description |
| -------- | -------- |
| surface_yuv | The input surface carries raw data.|
| surface_es | The input surface carries ES data.|
The table below lists the supported audio and video encoding formats.
| Encoding Format| Description |
| -------- | -------- |
| audio/mp4a-latm | Audio encoding format MP4A-LATM.|
| video/mp4v-es | Video encoding format MPEG-4.|
| video/avc | Video encoding format AVC.|
The table below lists the supported output file formats.
| Format| Description |
| -------- | -------- |
| MP4| Video container format MP4.|
| M4A| Audio container format M4A.|
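Putting the formats together, a pure audio recording configuration might look as follows. This is a sketch under stated assumptions: the file descriptor in **url** is hypothetical and must come from a file you have created and opened.

```ts
import media from '@ohos.multimedia.media';

async function recordAudio() {
  let avRecorder = await media.createAVRecorder();
  let config: media.AVRecorderConfig = {
    audioSourceType: media.AudioSourceType.AUDIO_SOURCE_TYPE_MIC, // mic source
    profile: {
      audioBitrate: 100000,
      audioChannels: 2,
      audioCodec: media.CodecMimeType.AUDIO_AAC,        // audio/mp4a-latm
      audioSampleRate: 48000,
      fileFormat: media.ContainerFormatType.CFT_MPEG_4A // M4A container
    },
    url: 'fd://35' // Hypothetical file descriptor of the output file.
  };
  await avRecorder.prepare(config);
  await avRecorder.start();
}
```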
# AVSession Overview
> **NOTE**
>
> All APIs of the **AVSession** module are system APIs and can be called only by system applications.
The Audio and Video Session (AVSession) service is used to manage the playback behavior of all audio and video applications in the system in a unified manner. For example, it allows only one audio application in the playing state.
## Overview
AVSession, short for audio and video session, is also known as media session.
Audio and video applications access the AVSession service and send application data (for example, the song being played and the playback state) to it. Through a controller, the user can choose another application or device to continue the playback. If an application does not access the AVSession service, its playback will be forcibly interrupted when it switches to the background.
- Application developers can use the APIs provided by the **AVSession** module to connect their audio and video applications to the system's Media Controller.
- System developers can use the APIs provided by the **AVSession** module to display media information of system audio and video applications and carry out unified playback control.
To implement background playback, you must also request a continuous task to prevent the task from being suspended. For details, see [Continuous Task Development](../task-management/continuous-task-dev-guide.md).
You can implement the following features through the **AVSession** module:
1. Unified playback control entry
   If there are multiple audio and video applications on the device, users need to switch to and access different applications to control media playback. With AVSession, a unified playback control entry of the system (such as Media Controller) is used for playback control of these audio and video applications, and no more switching is required.
2. Better background application management
   When an application running in the background automatically starts audio playback, it is difficult for users to locate the application. With AVSession, users can quickly find the application that plays the audio clip in Media Controller.
## Basic Concepts
Be familiar with the following basic concepts before development:
- AVSession
  A channel used for information exchange between applications and Media Controller. For AVSession, one end is the media application under control, and the other end is a controller (for example, Media Controller or AI Voice). Through AVSession, an application can transfer its media playback information to the controller and receive control commands from it.
- Provider
  An audio and video application that accesses the AVSession service. After accessing AVSession, the application must provide its media information, for example, the name of the item to play and the playback state, to AVSession. Through AVSession, the application also receives control commands from the controller and responds accordingly.
- Controller
  A system application that accesses AVSession to provide global control on audio and video playback behavior. Typical controllers on OpenHarmony devices are Media Controller and AI Voice. The following sections use Media Controller as an example of the controller. After accessing AVSession, the controller obtains the latest media information and sends control commands to the audio and video applications through AVSession.
- AVSessionController
  An object that controls the playback behavior of the provider. It obtains the playback information of the audio and video application and listens for the application playback changes to synchronize the AVSession information between the application and controller. The controller is the holder of an **AVSessionController** object.
- AVSessionManager
  An object that provides the capability of managing sessions. It can create an **AVSession** object, create an **AVSessionController** object, send control commands, and listen for session state changes.
## AVSession Interaction Process
The **AVSession** module provides two key classes: **AVSession** and **AVSessionController**. AVSessions are classified into local AVSessions and distributed AVSessions.
**Figure 1** AVSession interaction
![AVSession Interaction Process](figures/avsession-interaction-process.png)
- Local AVSession
  A local AVSession establishes a connection between the provider and controller on the local device, so as to implement unified playback control and media information display for audio and video applications in the system.
  First, an audio application creates an **AVSession** object and sets the session information, including the media metadata, launcher ability, and playback state. Then, Media Controller creates an **AVSessionController** object to obtain the session information and send the 'play' command to the audio application. Finally, the audio application responds to the command and updates its playback state.
- Distributed AVSession
  A distributed AVSession establishes a connection between the provider and controller in the cross-device scenario, so as to implement cross-device playback control and media information display for audio and video applications in the system. For example, you can project the content played on device A to device B and perform playback control on device B.
  When a connected device creates a local session, Media Controller or the audio application can select another device to be projected based on the device list, synchronize the local session to the remote device, and generate a controllable remote session. The remote session is controlled by sending control commands to the remote device's application through its **AVSessionController**.
## Constraints
- The playback information displayed in Media Controller is the media information proactively written by the media application to AVSession.
- Media Controller controls the playback of a media application based on the responses of the media application to control commands.
- AVSession can only transmit media playback information and control commands. It does not display information or execute control commands itself.
- Do not develop Media Controller for common applications. For common audio and video applications running on OpenHarmony, the default control end is Media Controller, which is a system application. You do not need to carry out additional development for Media Controller.
- If you want to develop your own system running OpenHarmony, you can develop your own Media Controller.
- For better background management of audio and video applications, the **AVSession** module enforces background control for applications. Only applications that have accessed the AVSession service can play audio in the background; otherwise, the system forcibly pauses the playback when the application switches to the background.
# Device Input Management
Before developing a camera application, you must create an independent camera object. The application invokes and controls the camera object to perform basic operations such as preview, photographing, and video recording.
## How to Develop
Read [Camera](../reference/apis/js-apis-camera.md) for the API reference.
1. Import the camera module, which provides camera-related attributes and methods.
```ts
import camera from '@ohos.multimedia.camera';
```
2. Call **getCameraManager()** to obtain a **CameraManager** object.
```ts
let context: any = getContext(this);
let cameraManager = camera.getCameraManager(context);
```
> **NOTE**
>
> If obtaining the object fails, the camera hardware may be occupied or unusable. If it is occupied, wait until it is released.
3. Call **getSupportedCameras()** in the **CameraManager** class to obtain the list of cameras supported by the current device. The list stores the IDs of all cameras supported. If the list is not empty, each ID in the list can be used to create an independent camera object. Otherwise, no camera is available for the current device and subsequent operations cannot be performed.
```ts
let cameraArray = cameraManager.getSupportedCameras();
if (cameraArray.length <= 0) {
console.error("cameraManager.getSupportedCameras error");
return;
}
for (let index = 0; index < cameraArray.length; index++) {
console.info('cameraId : ' + cameraArray[index].cameraId); // Obtain the camera ID.
console.info('cameraPosition : ' + cameraArray[index].cameraPosition); // Obtain the camera position.
console.info('cameraType : ' + cameraArray[index].cameraType); // Obtain the camera type.
console.info('connectionType : ' + cameraArray[index].connectionType); // Obtain the camera connection type.
}
```
4. Call **getSupportedOutputCapability()** to obtain all output streams supported by the current device, such as preview streams and photo streams. The output stream is in each **profile** field under **CameraOutputCapability**.
```ts
// Create a camera input stream.
let cameraInput;
try {
cameraInput = cameraManager.createCameraInput(cameraArray[0]);
} catch (error) {
console.error('Failed to createCameraInput errorCode = ' + error.code);
}
// Listen for CameraInput errors.
let cameraDevice = cameraArray[0];
cameraInput.on('error', cameraDevice, (error) => {
console.info(`Camera input error code: ${error.code}`);
})
// Open the camera.
await cameraInput.open();
// Obtain the output stream capabilities supported by the camera.
let cameraOutputCapability = cameraManager.getSupportedOutputCapability(cameraArray[0]);
if (!cameraOutputCapability) {
console.error("cameraManager.getSupportedOutputCapability error");
return;
}
console.info("outputCapability: " + JSON.stringify(cameraOutputCapability));
```
## Status Listening
During camera application development, you can listen for the camera status, including the appearance of a new camera, removal of a camera, and availability of a camera. The camera ID and camera status are used in the callback function. When a new camera appears, the new camera can be added to the supported camera list.
Register the 'cameraStatus' event and return the listening result through a callback, which carries the **CameraStatusInfo** parameter. For details about the parameter, see [CameraStatusInfo](../reference/apis/js-apis-camera.md#camerastatusinfo).
```ts
cameraManager.on('cameraStatus', (cameraStatusInfo) => {
console.info(`camera: ${cameraStatusInfo.camera.cameraId}`);
console.info(`status: ${cameraStatusInfo.status}`);
})
```
# Camera Metadata
Metadata is the description and context of image information returned by the camera application. It provides detailed data for the image information, for example, coordinates of a viewfinder frame for identifying a portrait in a photo or a video.
Metadata uses a tag (key) to find the corresponding data during the transfer of parameters and configurations, reducing memory copy operations.
## How to Develop
Read [Camera](../reference/apis/js-apis-camera.md) for the API reference.
1. Obtain the metadata types supported by the current device from **supportedMetadataObjectTypes** in **CameraOutputCapability**, and then use **createMetadataOutput()** to create a metadata output stream.
```ts
let metadataObjectTypes = cameraOutputCapability.supportedMetadataObjectTypes;
let metadataOutput;
try {
metadataOutput = cameraManager.createMetadataOutput(metadataObjectTypes);
} catch (error) {
// If the operation fails, error.code is returned and processed.
  console.error('Failed to createMetadataOutput. errorCode = ' + error.code);
}
```
2. Call **start()** to start outputting metadata. If the call fails, an error code is returned. For details, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
```ts
metadataOutput.start().then(() => {
console.info('Callback returned with metadataOutput started.');
}).catch((err) => {
  console.error('Failed to start metadataOutput. errorCode = ' + err.code);
});
```
3. Call **stop()** to stop outputting metadata. If the call fails, an error code is returned. For details, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
```ts
metadataOutput.stop().then(() => {
console.info('Callback returned with metadataOutput stopped.');
}).catch((err) => {
  console.error('Failed to stop metadataOutput. errorCode = ' + err.code);
});
```
## Status Listening
During camera application development, you can listen for the status of metadata objects and output stream.
- Register the 'metadataObjectsAvailable' event to listen for metadata objects that are available. When a valid metadata object is detected, the callback function returns the metadata. This event can be registered when a **MetadataOutput** object is created.
```ts
metadataOutput.on('metadataObjectsAvailable', (metadataObjectArr) => {
console.info(`metadata output metadataObjectsAvailable`);
})
```
> **NOTE**
>
> Currently, only **FACE_DETECTION** is available for the metadata type. The metadata object is the rectangle of the recognized face, including the x-axis coordinate and y-axis coordinate of the upper left corner of the rectangle as well as the width and height of the rectangle.
- Register the 'error' event to listen for metadata stream errors. The callback function returns an error code when an API is incorrectly used. For details about the error code types, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
```ts
metadataOutput.on('error', (metadataOutputError) => {
console.info(`Metadata output error code: ${metadataOutputError.code}`);
})
```
# Camera Overview
With the APIs provided by the camera module of the multimedia subsystem, you can develop a camera application. The application accesses and operates the camera hardware to implement basic operations, such as preview, photographing, and video recording. It can also perform more operations, for example, controlling the flash and exposure time, and focusing or adjusting the focus.
## Development Model
The camera application invokes the camera hardware to collect and process image and video data, and output images and videos. It can be used when there are multiple lenses (such as wide-angle lens, long-focus lens, and ToF lens) in various service scenarios (such as different requirements on the resolution, format, and effect).
The figure below illustrates the working process of the camera module. The working process can be summarized into three parts: input device management, session management, and output management.
- During input device management, the camera application invokes the camera hardware to collect data and uses the data as an input stream.
- During session management, you can configure an input stream to determine the camera to be used. You can also set parameters, such as the flash, exposure time, focus, and focus adjustment, to implement different shooting effects in various service scenarios. The application can switch between sessions to meet service requirements in different scenarios.
- During output management, you can configure an output stream, which can be a preview stream, photo stream, or video stream.
**Figure 1** Camera working process
![Camera Workflow](figures/camera-workflow.png)
For better application development, you are also advised to understand the camera development model.
**Figure 2** Camera development model
![Camera Development Model](figures/camera-development-model.png)
The camera application controls the camera hardware to implement basic operations such as image display (preview), photo saving (photographing), and video recording. During the implementation, the camera service controls the camera hardware to collect and output data, and transmits the data to a specific module for processing through a BufferQueue at the bottom camera device hardware interface (HDI) layer. You can ignore the BufferQueue during application development. It is used to send the data processed by the bottom layer to the upper layer for image display.
For example, in a video recording scenario, the recording service creates a video surface and provides it to the camera service for data transmission. The camera service controls the camera device to collect video data and generate a video stream. After processing the collected data at the HDI layer, the camera service transmits the video stream to the recording service through the surface. The recording service processes the video stream and saves it as a video file. Now video recording is complete.
# Camera Development Preparations
The main process of camera application development includes development preparations, device input management, session management, preview, photographing, and video recording.
Before developing a camera application, you must request camera-related permissions (as described in the table below) to ensure that the application has the permission to access the camera hardware and other services. Before requesting the permission, ensure that the [basic principles for permission management](../security/accesstoken-overview.md#basic-principles-for-permission-management) are met.
| Permission| Description| Authorization Mode|
| -------- | -------- | -------- |
| ohos.permission.CAMERA | Allows an application to use the camera to take photos and record videos.| user_grant |
| ohos.permission.MICROPHONE | Allows an application to access the microphone.<br>This permission is required only if the application is used to record audio.| user_grant |
| ohos.permission.WRITE_MEDIA | Allows an application to read media files from and write media files into the user's external storage. This permission is optional.| user_grant |
| ohos.permission.READ_MEDIA | Allows an application to read media files from the user's external storage. This permission is optional.| user_grant |
| ohos.permission.MEDIA_LOCATION | Allows an application to access geographical locations in the user's media file. This permission is optional.| user_grant |
After configuring the permissions in the **module.json5** file, the application must call [abilityAccessCtrl.requestPermissionsFromUser](../reference/apis/js-apis-abilityAccessCtrl.md#requestpermissionsfromuser9) to check whether the required permissions are granted. If not, request the permissions from the user by displaying a dialog box.
For details about how to request and verify the permissions, see [Permission Application Guide](../security/accesstoken-guidelines.md).
> **NOTE**
>
> Even if the user has granted a permission, the application must check for the permission before calling an API protected by the permission. It should not persist the permission granted status, because the user can revoke the permission through the system application **Settings**.
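A minimal sketch of the permission check, assuming the camera permission has been declared in **module.json5** (the function name here is illustrative):

```ts
import abilityAccessCtrl from '@ohos.abilityAccessCtrl';

async function requestCameraPermission(context: any) {
  let atManager = abilityAccessCtrl.createAtManager();
  // Display a dialog box if the permission has not been granted yet.
  let result = await atManager.requestPermissionsFromUser(context, ['ohos.permission.CAMERA']);
  // authResults: 0 means granted; -1 means denied.
  if (result.authResults[0] !== 0) {
    console.error('The user denied the camera permission.');
  }
}
```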
# Camera Preview
Preview is the image you see after you start the camera application but before you take photos or record videos.
## How to Develop
Read [Camera](../reference/apis/js-apis-camera.md) for the API reference.
1. Create a surface.
The XComponent, whose capabilities are provided by the UI framework, supplies the surface for the preview stream. For details, see [XComponent](../reference/arkui-ts/ts-basic-components-xcomponent.md).
```ts
// Create an XComponentController object.
mXComponentController: XComponentController = new XComponentController();
build() {
Flex() {
// Create an XComponent.
XComponent({
id: '',
type: 'surface',
libraryname: '',
controller: this.mXComponentController
})
.onLoad(() => {
// Set the surface width and height (1920 x 1080). For details about how to set the preview size, see the preview resolutions supported by the current device, which are obtained from previewProfilesArray.
this.mXComponentController.setXComponentSurfaceSize({surfaceWidth:1920,surfaceHeight:1080});
// Obtain the surface ID.
globalThis.surfaceId = this.mXComponentController.getXComponentSurfaceId();
})
.width('1920px')
.height('1080px')
}
}
```
2. Obtain **previewProfilesArray**, the list of preview capabilities supported by the current device, from the **previewProfiles** field of the **CameraOutputCapability** class. Then call **createPreviewOutput()** to create a preview output stream, with the first parameter set to the first item in the **previewProfilesArray** array and the second parameter set to the surface ID obtained in step 1.
```ts
let previewProfilesArray = cameraOutputCapability.previewProfiles;
let previewOutput;
try {
previewOutput = cameraManager.createPreviewOutput(previewProfilesArray[0], surfaceId);
}
catch (error) {
console.error("Failed to create the PreviewOutput instance." + error);
}
```
3. Call **start()** to start outputting the preview stream. If the call fails, an error code is returned. For details, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
```ts
previewOutput.start().then(() => {
console.info('Callback returned with previewOutput started.');
}).catch((err) => {
  console.error('Failed to start previewOutput. errorCode = ' + err.code);
});
```
## Status Listening
During camera application development, you can listen for the preview output stream status, including preview stream start, preview stream end, and preview stream output errors.
- Register the 'frameStart' event to listen for preview start events. This event can be registered when a **PreviewOutput** object is created and is triggered when the bottom layer starts exposure for the first time. Once a result is returned, the preview stream is started.
```ts
previewOutput.on('frameStart', () => {
console.info('Preview frame started');
})
```
- Register the 'frameEnd' event to listen for preview end events. This event can be registered when a **PreviewOutput** object is created and is triggered when the last frame of preview ends. Once a result is returned, the preview stream has ended.
```ts
previewOutput.on('frameEnd', () => {
console.info('Preview frame ended');
})
```
- Register the 'error' event to listen for preview output errors. The callback function returns an error code when an API is incorrectly used. For details about the error code types, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
```ts
previewOutput.on('error', (previewOutputError) => {
console.info(`Preview output error code: ${previewOutputError.code}`);
})
```
# Camera Session Management
Before using the camera application for preview, photographing, video recording, and metadata, you must create a camera session.
You can implement the following functions in the session:
- Configure the camera input and output streams. This is mandatory for photographing.
Configuring an input stream is to add a device input, which means that the user selects a camera for photographing. Configuring an output stream is to select a data output mode. For example, to implement photographing, you must configure both the preview stream and photo stream as the output stream. The data of the preview stream is displayed on the XComponent, and that of the photo stream is saved to the Gallery application through the **ImageReceiver** API.
- Perform more operations on the camera hardware. For example, add the flash and adjust the focal length. For details about the supported configurations and APIs, see [Camera API Reference](../reference/apis/js-apis-camera.md).
- Control session switching. The application can switch the camera mode by removing and adding output streams. For example, to switch from photographing to video recording, the application must remove the photo output stream and add the video output stream.
After the session configuration is complete, the application must commit the configuration and start the session before using the camera functionalities.
## How to Develop
1. Call **createCaptureSession()** in the **CameraManager** class to create a session.
```ts
let captureSession;
try {
captureSession = cameraManager.createCaptureSession();
} catch (error) {
console.error('Failed to create the CaptureSession instance. errorCode = ' + error.code);
}
```
2. Call **beginConfig()** in the **CaptureSession** class to start configuration for the session.
```ts
try {
captureSession.beginConfig();
} catch (error) {
console.error('Failed to beginConfig. errorCode = ' + error.code);
}
```
3. Configure the session. You can call **addInput()** and **addOutput()** in the **CaptureSession** class to add the input and output streams to the session, respectively. The code snippet below uses adding the preview stream **previewOutput** and photo stream **photoOutput** as an example to implement the photographing and preview mode.
After the configuration, call **commitConfig()** and **start()** in the **CaptureSession** class in sequence to commit the configuration and start the session.
```ts
try {
captureSession.addInput(cameraInput);
} catch (error) {
console.error('Failed to addInput. errorCode = ' + error.code);
}
try {
captureSession.addOutput(previewOutput);
} catch (error) {
console.error('Failed to addOutput(previewOutput). errorCode = ' + error.code);
}
try {
captureSession.addOutput(photoOutput);
} catch (error) {
console.error('Failed to addOutput(photoOutput). errorCode = ' + error.code);
}
await captureSession.commitConfig();
await captureSession.start().then(() => {
console.info('Promise returned to indicate the session start success.');
})
```
4. Control the session. You can call **stop()** in the **CaptureSession** class to stop the session, and call **removeOutput()** and **addOutput()** in this class to switch to another session. The code snippet below uses removing the photo stream **photoOutput** and adding the video stream **videoOutput** as an example to complete the switching from photographing to recording.
```ts
await captureSession.stop();
try {
captureSession.beginConfig();
} catch (error) {
console.error('Failed to beginConfig. errorCode = ' + error.code);
}
// Remove the photo output stream from the session.
try {
captureSession.removeOutput(photoOutput);
} catch (error) {
console.error('Failed to removeOutput(photoOutput). errorCode = ' + error.code);
}
// Add the video output stream to the session.
try {
captureSession.addOutput(videoOutput);
} catch (error) {
console.error('Failed to addOutput(videoOutput). errorCode = ' + error.code);
}
```
# Distributed AVSession Overview
With distributed AVSession, OpenHarmony allows users to project locally played media to a distributed device for a better playback effect. For example, users can project audio played on a tablet to a smart speaker.
After the user initiates a projection, the media information is synchronized to the distributed device in real time, and the user can control the playback (for example, previous, next, play, and pause) on the distributed device. From the perspective of the user, the playback control operation on the distributed device is the same as that on the local device.
## Interaction Process
After the local device is paired with a distributed device, the controller on the local device projects media to the distributed device through AVSessionManager, thereby implementing a distributed AVSession. The interaction process is shown below.
![Distributed AVSession Interaction Process](figures/distributed-avsession-interaction-process.png)
The AVSession service on the distributed device automatically creates an **AVSession** object for information synchronization with the local device. The information to synchronize includes the session information, control commands, and events.
## Distributed AVSession Process
After the user triggers a projection, the remote device automatically creates an **AVSession** object to associate it with that on the local device. The detailed process is as follows:
1. After receiving an audio device switching command, the AVSession service on the local device synchronizes the session information to the distributed device.
2. The controller (for example, Media Controller) on the distributed device detects the new **AVSession** object and creates an **AVSessionController** object for it.
3. Through the **AVSessionController** object, the controller on the distributed device sends a control command to the **AVSession** object on the local device.
4. Upon the receipt of the control command, the **AVSession** object on the local device triggers a callback to the local audio application.
5. The **AVSession** object on the local device synchronizes the new session information to the controller on the distributed device in real time.
6. When the remote device is disconnected, the audio stream is switched back to the local device and the playback is paused. (The audio module completes the switchback, and the AVSession service instructs the application to pause the playback.)
## Distributed AVSession Scenarios
There are two scenarios for projection implemented using the distributed AVSession:
- System projection: The controller (for example, Media Controller) initiates a projection.
This type of projection takes effect for all applications. After a system projection, all audio on the local device is played from the distributed device by default.
- Application projection: An audio and video application integrates the projection component to initiate a projection. (This scenario is not supported yet.)
This type of projection takes effect for a single application. After an application projection, audio of the application on the local device is played from the distributed device, and audio of other applications is still played from the local device.
Projection preemption is supported. If application A initiates a projection to a remote device and then application B initiates a projection to the same device, then audio of application B is played on the remote device.
## Relationship Between Distributed AVSession and Distributed Audio Playback
The internal logic for the distributed AVSession to implement projection is as follows:
- APIs related to [distributed audio playback](distributed-audio-playback.md) are called to project audio streams to the distributed device.
- The distributed capability is used to project the session metadata to the distributed device for display.
Projection implemented by using the distributed AVSession not only enables audio to be played on the distributed device, but also enables media information to be displayed on the distributed device. It also allows the user to perform playback control on the distributed device.