Commit 7950558b authored by Gloria

Update docs against 10659+11222+11309+11055+10802+10776+10846+11409

Signed-off-by: wusongqing<wusongqing@huawei.com>
Parent 91f37c7d
# Audio Capture Development
## Introduction
You can use the APIs provided by **AudioCapturer** to record raw audio files, thereby implementing audio data collection.

**Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioCapturer** instance. This is because some operations can be performed only when the audio capturer is in a given state. If the application performs an operation when the audio capturer is not in the given state, the system may throw an exception or generate other undefined behavior.

## Working Principles
The following figure shows the audio capturer state transitions.

**Figure 1** Audio capturer state transitions

![audio-capturer-state](figures/audio-capturer-state.png)

- **PREPARED**: The audio capturer enters this state by calling **create()**.
- **RUNNING**: The audio capturer enters this state by calling **start()** when it is in the **PREPARED** state or by calling **start()** when it is in the **STOPPED** state.
- **STOPPED**: The audio capturer in the **RUNNING** state can call **stop()** to stop capturing audio data.
- **RELEASED**: The audio capturer in the **PREPARED** or **STOPPED** state can use **release()** to release all occupied hardware and software resources. It will not transit to any other state after it enters the **RELEASED** state.

## Constraints
Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details about permission configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
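For example, for the stage model, the declaration in the application's **module.json5** might look like the following sketch; the ability name and reason resource are placeholders, not values taken from this guide:

```json5
// Hypothetical module.json5 excerpt declaring the microphone permission (stage model).
{
  "module": {
    "requestPermissions": [
      {
        "name": "ohos.permission.MICROPHONE",
        "reason": "$string:microphone_reason", // Placeholder string resource explaining the request.
        "usedScene": {
          "abilities": ["EntryAbility"],       // Placeholder ability name.
          "when": "inuse"
        }
      }
    ]
  }
}
```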
## How to Develop
For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8).
1. Use **createAudioCapturer()** to create an **AudioCapturer** instance.

   Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording state, and register a callback for notification.

   ```js
   import audio from '@ohos.multimedia.audio';

   let audioStreamInfo = {
     samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
     channels: audio.AudioChannel.CHANNEL_1,
     sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
     encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
   }

   let audioCapturerInfo = {
     source: audio.SourceType.SOURCE_TYPE_MIC,
     capturerFlags: 0 // 0 is the extended flag bit of the audio capturer. The default value is 0.
   }

   let audioCapturerOptions = {
     streamInfo: audioStreamInfo,
     capturerInfo: audioCapturerInfo
   }

   let audioCapturer = await audio.createAudioCapturer(audioCapturerOptions);
   console.log('AudioRecLog: Create audio capturer success.');
   ```
2. Use **start()** to start audio recording.

   The capturer state will be **STATE_RUNNING** once the audio capturer is started. The application can then begin reading buffers.

   ```js
   import audio from '@ohos.multimedia.audio';

   async function startCapturer() {
     let state = audioCapturer.state;
     // The audio capturer can be started only when it is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
     if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
       state != audio.AudioState.STATE_STOPPED) {
       console.info('Capturer is not in a correct state to start');
       return;
     }
     await audioCapturer.start();

     state = audioCapturer.state;
     if (state == audio.AudioState.STATE_RUNNING) {
       console.info('AudioRecLog: Capturer started');
     } else {
       console.error('AudioRecLog: Capturer start failed');
     }
   }
   ```
3. Read the captured audio data and convert it to a byte stream. Call **read()** repeatedly to read the data until the application stops the recording.

   The following example shows how to write recorded data into a file.

   ```js
   import fileio from '@ohos.fileio';

   let state = audioCapturer.state;
   // The read operation can be performed only when the state is STATE_RUNNING.
   if (state != audio.AudioState.STATE_RUNNING) {
     console.info('Capturer is not in a correct state to read');
     return;
   }

   const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the collected audio file.

   let bufferSize = await audioCapturer.getBufferSize(); // Obtain the minimum buffer size to read.
   let fd = fileio.openSync(path, 0o102, 0o777);
   if (fd !== null) {
     console.info('AudioRecLog: file fd created');
   } else {
     console.info('AudioRecLog: file fd create : FAILED');
     return;
   }

   fd = fileio.openSync(path, 0o2002, 0o666);
   if (fd !== null) {
     console.info('AudioRecLog: file fd opened in append mode');
   }

   let numBuffersToCapture = 150; // Write data for 150 times.
   while (numBuffersToCapture) {
     let buffer = await audioCapturer.read(bufferSize, true);
     if (typeof(buffer) == 'undefined') {
       console.info('AudioRecLog: read buffer failed');
     } else {
       let number = fileio.writeSync(fd, buffer);
       console.info(`AudioRecLog: data written: ${number}`);
     }
     numBuffersToCapture--;
   }
   ```
4. Once the recording is complete, call **stop()** to stop the recording.

   ```js
   async function StopCapturer() {
     let state = audioCapturer.state;
     // The audio capturer can be stopped only when it is in the STATE_RUNNING or STATE_PAUSED state.
     if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
       console.info('AudioRecLog: Capturer is not running or paused');
       return;
     }

     await audioCapturer.stop();

     state = audioCapturer.state;
     if (state == audio.AudioState.STATE_STOPPED) {
       console.info('AudioRecLog: Capturer stopped');
     } else {
       console.error('AudioRecLog: Capturer stop failed');
     }
   }
   ```
5. After the task is complete, call **release()** to release related resources.

   ```js
   async function releaseCapturer() {
     let state = audioCapturer.state;
     // The audio capturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
     if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
       console.info('AudioRecLog: Capturer already released');
       return;
     }

     await audioCapturer.release();

     state = audioCapturer.state;
     if (state == audio.AudioState.STATE_RELEASED) {
       console.info('AudioRecLog: Capturer released');
     } else {
       console.info('AudioRecLog: Capturer release failed');
     }
   }
   ```
6. (Optional) Obtain the audio capturer information.

   You can use the following code to obtain the audio capturer information:

   ```js
   // Obtain the audio capturer state.
   let state = audioCapturer.state;

   // Obtain the audio capturer information.
   let audioCapturerInfo : audio.AudioCapturerInfo = await audioCapturer.getCapturerInfo();

   // Obtain the audio stream information.
   let audioStreamInfo : audio.AudioStreamInfo = await audioCapturer.getStreamInfo();

   // Obtain the audio stream ID.
   let audioStreamId : number = await audioCapturer.getAudioStreamId();

   // Obtain the Unix timestamp, in nanoseconds.
   let audioTime : number = await audioCapturer.getAudioTime();

   // Obtain a proper minimum buffer size.
   let bufferSize : number = await audioCapturer.getBufferSize();
   ```
7. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.
After the mark reached event is subscribed to, when the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('markReach', (reachNumber) => {
console.info('Mark reach event Received');
console.info(`The Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for.
```
8. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event.
After the period reached event is subscribed to, each time the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioCapturer.on('periodReach', (reachNumber) => {
console.info('Period reach event Received');
console.info(`In this period, the Capturer reached frame: ${reachNumber}`);
});
audioCapturer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for.
```
9. If your application needs to perform some operations when the audio capturer state is updated, it can subscribe to the state change event. When the audio capturer state is updated, the application receives a callback containing the event type.
```js
audioCapturer.on('stateChange', (state) => {
console.info(`AudioCapturerLog: Changed State to : ${state}`)
switch (state) {
case audio.AudioState.STATE_PREPARED:
console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------');
console.info('Audio State is : Prepared');
break;
case audio.AudioState.STATE_RUNNING:
console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------');
console.info('Audio State is : Running');
break;
case audio.AudioState.STATE_STOPPED:
console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------');
console.info('Audio State is : stopped');
break;
case audio.AudioState.STATE_RELEASED:
console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------');
console.info('Audio State is : released');
break;
default:
console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
console.info('Audio State is : invalid');
break;
}
});
```
# Audio Interruption Mode Development

## Introduction
The audio interruption mode is used to control the playback of multiple audio streams.

Audio applications can set the audio interruption mode to independent or shared under **AudioRenderer**.

In shared mode, multiple audio streams share one session ID. In independent mode, each audio stream has an independent session ID.

**Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions.

## How to Develop
For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).

1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.

   Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.

   ```js
   import audio from '@ohos.multimedia.audio';

   var audioStreamInfo = {
     samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
     channels: audio.AudioChannel.CHANNEL_1,
     sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
     encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
   }

   var audioRendererInfo = {
     content: audio.ContentType.CONTENT_TYPE_MUSIC,
     usage: audio.StreamUsage.STREAM_USAGE_MEDIA,
     rendererFlags: 0
   }

   var audioRendererOptions = {
     streamInfo: audioStreamInfo,
     rendererInfo: audioRendererInfo
   }

   let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
   ```
2. Set the audio interruption mode.
...
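The body of this step is truncated above. As a minimal sketch, assuming the **AudioRenderer** instance from step 1 and the **setInterruptMode()** API shown later in this document, the independent mode can be set as follows:

```js
// Set the audio interruption mode to INDEPENDENT_MODE (each stream gets its own session ID).
let mode = audio.InterruptMode.INDEPENDENT_MODE;
await audioRenderer.setInterruptMode(mode);
console.info('setInterruptMode Success!');
```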
# Audio Rendering Development

## Introduction
**AudioRenderer** provides APIs for rendering audio files and controlling playback. It also supports audio interruption. You can use the APIs provided by **AudioRenderer** to play audio files in output devices and manage playback tasks.

Before calling the APIs, be familiar with the following terms:

- **Audio interruption**: When an audio stream with a higher priority needs to be played, the audio renderer interrupts the stream with a lower priority. For example, if a call comes in when the user is listening to music, the music playback, which is the lower priority stream, is paused.
- **Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioRenderer** instance. This is because some operations can be performed only when the audio renderer is in a given state. If the application performs an operation when the audio renderer is not in the given state, the system may throw an exception or generate other undefined behavior.
- **Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions. For more information, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
- **Audio interruption mode**: OpenHarmony provides two audio interruption modes: **shared mode** and **independent mode**. In shared mode, all **AudioRenderer** instances created by the same application share one focus object, and there is no focus transfer inside the application. Therefore, no callback will be triggered. In independent mode, each **AudioRenderer** instance has an independent focus object, and focus preemption occurs. Focus preemption triggers focus transfer, and the **AudioRenderer** instance that originally has the focus receives a notification through the callback. By default, the shared mode is used. You can call **setInterruptMode()** to set the independent mode.

## Working Principles
The following figure shows the audio renderer state transitions.

**Figure 1** Audio renderer state transitions

![audio-renderer-state](figures/audio-renderer-state.png)

- **PREPARED**: The audio renderer enters this state by calling **create()**.
- **RUNNING**: The audio renderer enters this state by calling **start()** when it is in the **PREPARED** state or by calling **start()** when it is in the **STOPPED** state.
- **PAUSED**: The audio renderer in the **RUNNING** state can call **pause()** to pause the audio playback. After the audio playback is paused, it can call **start()** to resume the playback.
- **STOPPED**: The audio renderer in the **PAUSED** or **RUNNING** state can call **stop()** to stop the playback.
- **RELEASED**: The audio renderer in the **PREPARED**, **PAUSED**, or **STOPPED** state can use **release()** to release all occupied hardware and software resources. It will not transit to any other state after it enters the **RELEASED** state.
## How to Develop
For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).

1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.

   Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.
   ```js
   import audio from '@ohos.multimedia.audio';

   let audioStreamInfo = {
     samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
     channels: audio.AudioChannel.CHANNEL_1,
     sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
     encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
   }

   let audioRendererInfo = {
     content: audio.ContentType.CONTENT_TYPE_SPEECH,
     usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
     rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0.
   }

   let audioRendererOptions = {
     streamInfo: audioStreamInfo,
     rendererInfo: audioRendererInfo
   }

   let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
   console.log("Create audio renderer success.");
   ```
2. Use **start()** to start audio rendering.
   ```js
   async function startRenderer() {
     let state = audioRenderer.state;
     // The audio renderer can be started only when it is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
     if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
       state != audio.AudioState.STATE_STOPPED) {
       console.info('Renderer is not in a correct state to start');
       return;
     }

     await audioRenderer.start();

     state = audioRenderer.state;
     if (state == audio.AudioState.STATE_RUNNING) {
       console.info('Renderer started');
     } else {
       console.error('Renderer start failed');
     }
   }
   ```
The renderer state will be **STATE_RUNNING** once the audio renderer is started. The application can then begin reading buffers.
3. Call **write()** to write data to the buffer.

   Read the audio data to be played to the buffer. Call **write()** repeatedly to write the data to the buffer.

   ```js
   import fileio from '@ohos.fileio';
   import audio from '@ohos.multimedia.audio';

   async function writeBuffer(buf) {
     // The write operation can be performed only when the state is STATE_RUNNING.
     if (audioRenderer.state != audio.AudioState.STATE_RUNNING) {
       console.error('Renderer is not running, do not write');
       return;
     }
     let writtenbytes = await audioRenderer.write(buf);
     console.info(`Actual written bytes: ${writtenbytes} `);
     if (writtenbytes < 0) {
       console.error('Write buffer failed. check the state of renderer');
     }
   }

   // Set a proper buffer size for the audio renderer. You can also select a buffer of another size.
   const bufferSize = await audioRenderer.getBufferSize();
   let dir = globalThis.fileDir; // You must use the sandbox path.
   const path = dir + '/file_example_WAV_2MG.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/file_example_WAV_2MG.wav
   console.info(`file path: ${path}`);
   let ss = fileio.createStreamSync(path, 'r');
   const totalSize = fileio.statSync(path).size; // Size of the file to render.
   let discardHeader = new ArrayBuffer(bufferSize);
   ss.readSync(discardHeader);
   let rlen = 0;
   rlen += bufferSize;

   let id = setInterval(async () => {
     if (audioRenderer.state == audio.AudioState.STATE_RELEASED) { // The rendering stops if the audio renderer is in the STATE_RELEASED state.
       ss.closeSync();
       await audioRenderer.stop();
       clearInterval(id);
     }
     if (audioRenderer.state == audio.AudioState.STATE_RUNNING) {
       if (rlen >= totalSize) { // The rendering stops if the file finishes reading.
         ss.closeSync();
         await audioRenderer.stop();
         clearInterval(id);
       }
       let buf = new ArrayBuffer(bufferSize);
       rlen += ss.readSync(buf);
       console.info(`Total bytes read from file: ${rlen}`);
       writeBuffer(buf);
     } else {
       console.info('check after next interval');
     }
   }, 30); // The timer interval is set based on the audio format. The unit is millisecond.
   ```
4. (Optional) Call **pause()** or **stop()** to pause or stop rendering.

   ```js
   async function pauseRenderer() {
     let state = audioRenderer.state;
     // The audio renderer can be paused only when it is in the STATE_RUNNING state.
     if (state != audio.AudioState.STATE_RUNNING) {
       console.info('Renderer is not running');
       return;
     }

     await audioRenderer.pause();

     state = audioRenderer.state;
     if (state == audio.AudioState.STATE_PAUSED) {
       console.info('Renderer paused');
     } else {
       console.error('Renderer pause failed');
     }
   }

   async function stopRenderer() {
     let state = audioRenderer.state;
     // The audio renderer can be stopped only when it is in the STATE_RUNNING or STATE_PAUSED state.
     if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
       console.info('Renderer is not running or paused');
       return;
     }

     await audioRenderer.stop();

     state = audioRenderer.state;
     if (state == audio.AudioState.STATE_STOPPED) {
       console.info('Renderer stopped');
     } else {
       console.error('Renderer stop failed');
     }
   }
   ```
5. (Optional) Call **drain()** to clear the buffer.
```js
async function drainRenderer() {
let state = audioRenderer.state;
// drain() can be used only when the audio renderer is in the STATE_RUNNING state.
if (state != audio.AudioState.STATE_RUNNING) {
console.info('Renderer is not running');
return;
}
await audioRenderer.drain();
state = audioRenderer.state;
}
```
6. After the task is complete, call **release()** to release related resources.

   **AudioRenderer** uses a large number of system resources. Therefore, ensure that the resources are released after the task is complete.

   ```js
   async function releaseRenderer() {
     let state = audioRenderer.state;
     // The audio renderer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
     if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
       console.info('Renderer already released');
       return;
     }

     await audioRenderer.release();

     state = audioRenderer.state;
     if (state == audio.AudioState.STATE_RELEASED) {
       console.info('Renderer released');
     } else {
       console.info('Renderer release failed');
     }
   }
   ```
7. (Optional) Obtain the audio renderer information.
You can use the following code to obtain the audio renderer information:
```js
// Obtain the audio renderer state.
let state = audioRenderer.state;
// Obtain the audio renderer information.
let audioRendererInfo : audio.AudioRendererInfo = await audioRenderer.getRendererInfo();
// Obtain the audio stream information.
let audioStreamInfo : audio.AudioStreamInfo = await audioRenderer.getStreamInfo();
// Obtain the audio stream ID.
let audioStreamId : number = await audioRenderer.getAudioStreamId();
// Obtain the Unix timestamp, in nanoseconds.
let audioTime : number = await audioRenderer.getAudioTime();
// Obtain a proper minimum buffer size.
let bufferSize : number = await audioRenderer.getBufferSize();
// Obtain the audio renderer rate.
let renderRate : audio.AudioRendererRate = await audioRenderer.getRenderRate();
```
8. (Optional) Set the audio renderer information.
You can use the following code to set the audio renderer information:
```js
// Set the audio renderer rate to RENDER_RATE_NORMAL.
let renderRate : audio.AudioRendererRate = audio.AudioRendererRate.RENDER_RATE_NORMAL;
await audioRenderer.setRenderRate(renderRate);
// Set the interruption mode of the audio renderer to SHARE_MODE.
let interruptMode : audio.InterruptMode = audio.InterruptMode.SHARE_MODE;
await audioRenderer.setInterruptMode(interruptMode);
// Set the volume of the stream to 0.5. (The volume ranges from 0.0 to 1.0.)
let volume : number = 0.5;
await audioRenderer.setVolume(volume);
```
9. (Optional) Use **on('audioInterrupt')** to subscribe to the audio interruption event, and use **off('audioInterrupt')** to unsubscribe from the event.
Audio interruption means that Stream A will be interrupted when Stream B with a higher or equal priority requests to become active and use the output device.
In some cases, the audio renderer performs forcible operations such as pausing and ducking, and notifies the application through **InterruptEvent**. In other cases, the application can choose to act on the **InterruptEvent** or ignore it.
In the case of audio interruption, the application may encounter write failures. To avoid such failures, interruption-unaware applications can use **audioRenderer.state** to check the audio renderer state before writing audio data. The applications can obtain more details by subscribing to the audio interruption events. For details, see [InterruptEvent](../reference/apis/js-apis-audio.md#interruptevent9).
It should be noted that the audio interruption event subscription of the **AudioRenderer** module is slightly different from **on('interrupt')** in [AudioManager](../reference/apis/js-apis-audio.md#audiomanager). The **on('interrupt')** and **off('interrupt')** APIs are deprecated since API version 9. In the **AudioRenderer** module, you only need to call **on('audioInterrupt')** to listen for focus change events. When the **AudioRenderer** instance created by the application performs actions such as start, stop, and pause, it requests the focus, which triggers focus transfer and in return enables the related **AudioRenderer** instance to receive a notification through the callback. For instances other than **AudioRenderer**, such as frequency modulation (FM) and voice wakeup, the application does not create an instance. In this case, the application can call **on('interrupt')** in **AudioManager** to receive a focus change notification.
```js
audioRenderer.on('audioInterrupt', (interruptEvent) => {
console.info('InterruptEvent Received');
console.info(`InterruptType: ${interruptEvent.eventType}`);
console.info(`InterruptForceType: ${interruptEvent.forceType}`);
console.info(`AInterruptHint: ${interruptEvent.hintType}`);
if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_FORCE) {
switch (interruptEvent.hintType) {
// Forcible pausing initiated by the audio framework. To prevent data loss, stop the write operation.
case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
isPlay = false;
break;
// Forcible stopping initiated by the audio framework. To prevent data loss, stop the write operation.
case audio.InterruptHint.INTERRUPT_HINT_STOP:
isPlay = false;
break;
// Forcible ducking initiated by the audio framework.
case audio.InterruptHint.INTERRUPT_HINT_DUCK:
break;
// Unducking initiated by the audio framework.
case audio.InterruptHint.INTERRUPT_HINT_UNDUCK:
break;
}
} else if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_SHARE) {
switch (interruptEvent.hintType) {
// Notify the application that the rendering starts.
case audio.InterruptHint.INTERRUPT_HINT_RESUME:
startRenderer();
break;
// Notify the application that the audio stream is interrupted. The application determines whether to continue. (In this example, the application pauses the rendering.)
case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
isPlay = false;
pauseRenderer();
break;
}
}
});
audioRenderer.off('audioInterrupt'); // Unsubscribe from the audio interruption event. This event will no longer be received.
```
10. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.
After the mark reached event is subscribed to, when the number of frames rendered by the audio renderer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioRenderer.on('markReach', (reachNumber) => {
console.info('Mark reach event Received');
console.info(`The renderer reached frame: ${reachNumber}`);
});
audioRenderer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for.
```
11. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event.
After the period reached event is subscribed to, each time the number of frames rendered by the audio renderer reaches the specified value, a callback is triggered and the specified value is returned.
```js
audioRenderer.on('periodReach', (reachNumber) => {
console.info('Period reach event Received');
console.info(`In this period, the renderer reached frame: ${reachNumber} `);
});
audioRenderer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for.
```
12. (Optional) Use **on('stateChange')** to subscribe to audio renderer state changes.
After the **stateChange** event is subscribed to, when the audio renderer state changes, a callback is triggered and the audio renderer state is returned.
```js
audioRenderer.on('stateChange', (audioState) => {
console.info('State change event Received');
console.info(`Current renderer state is: ${audioState}`);
});
```
13. (Optional) Handle exceptions of **on()**.
If the string or the parameter type passed in **on()** is incorrect, the application throws an exception. In this case, you can use **try-catch** to capture the exception.
```js
try {
audioRenderer.on('invalidInput', () => { // The string does not match.
})
} catch (err) {
console.info(`Call on function error, ${err}`); // The application throws exception 401.
}
try {
audioRenderer.on(1, () => { // The type of the input parameter is incorrect.
})
} catch (err) {
console.info(`Call on function error, ${err}`); // The application throws exception 6800101.
}
```
14. (Optional) Refer to the complete example of **on('audioInterrupt')**.
Create **AudioRender1** and **AudioRender2** in an application, configure the independent interruption mode, and call **on('audioInterrupt')** to subscribe to audio interruption events. At the beginning, **AudioRender1** has the focus. When **AudioRender2** attempts to obtain the focus, **AudioRenderer1** receives a focus transfer notification and the related log information is printed. If the shared mode is used, the log information will not be printed during application running.
```js
async runningAudioRender1(){
let audioStreamInfo = {
samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000,
channels: audio.AudioChannel.CHANNEL_1,
sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S32LE,
encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
let audioRendererInfo = {
content: audio.ContentType.CONTENT_TYPE_MUSIC,
usage: audio.StreamUsage.STREAM_USAGE_MEDIA,
rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0.
}
let audioRendererOptions = {
streamInfo: audioStreamInfo,
rendererInfo: audioRendererInfo
}
// 1.1 Create an instance.
audioRenderer1 = await audio.createAudioRenderer(audioRendererOptions);
console.info("Create audio renderer 1 success.");
// 1.2 Set the independent mode.
audioRenderer1.setInterruptMode(1).then( data => {
console.info('audioRenderer1 setInterruptMode Success!');
}).catch((err) => {
console.error(`audioRenderer1 setInterruptMode Fail: ${err}`);
});
// 1.3 Set the listener.
audioRenderer1.on('audioInterrupt', async(interruptEvent) => {
console.info(`audioRenderer1 on audioInterrupt : ${JSON.stringify(interruptEvent)}`)
});
// 1.4 Start rendering.
await audioRenderer1.start();
console.info('startAudioRender1 success');
// 1.5 Obtain the buffer size, which is the proper minimum buffer size of the audio renderer. You can also select a buffer of another size.
const bufferSize = await audioRenderer1.getBufferSize();
console.info(`audio bufferSize: ${bufferSize}`);
// 1.6 Obtain the original audio data file.
let dir = globalThis.fileDir; // You must use the sandbox path.
const path1 = dir + '/music001_48000_32_1.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/music001_48000_32_1.wav
console.info(`audioRender1 file path: ${ path1}`);
let ss1 = await fileio.createStream(path1,'r');
const totalSize1 = fileio.statSync(path1).size; // Size of the file to render.
console.info(`totalSize1 -------: ${totalSize1}`);
let discardHeader = new ArrayBuffer(bufferSize);
ss1.readSync(discardHeader);
let rlen = 0;
rlen += bufferSize;
// 1.7 Render the original audio data in the buffer by using audioRenderer.
let id = setInterval(async () => {
if (audioRenderer1.state == audio.AudioState.STATE_RELEASED) { // The rendering stops if the audio renderer is in the STATE_RELEASED state.
ss1.closeSync();
audioRenderer1.stop();
clearInterval(id);
}
if (audioRenderer1.state == audio.AudioState.STATE_RUNNING) {
if (rlen >= totalSize1) { // The rendering stops if the file finishes reading.
ss1.closeSync();
await audioRenderer1.stop();
clearInterval(id);
}
let buf = new ArrayBuffer(bufferSize);
rlen += ss1.readSync(buf);
console.info(`Total bytes read from file: ${rlen}`);
await writeBuffer(buf, audioRenderer1);
} else {
console.info('check after next interval');
}
}, 30); // The timer interval is set based on the audio format. The unit is millisecond.
}
async runningAudioRender2(){
let audioStreamInfo = {
samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000,
channels: audio.AudioChannel.CHANNEL_1,
sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S32LE,
encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
let audioRendererInfo = {
content: audio.ContentType.CONTENT_TYPE_MUSIC,
usage: audio.StreamUsage.STREAM_USAGE_MEDIA,
rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0.
}
let audioRendererOptions = {
streamInfo: audioStreamInfo,
rendererInfo: audioRendererInfo
}
// 2.1 Create another instance.
audioRenderer2 = await audio.createAudioRenderer(audioRendererOptions);
console.info("Create audio renderer 2 success.");
// 2.2 Set the independent mode.
audioRenderer2.setInterruptMode(1).then( data => {
console.info('audioRenderer2 setInterruptMode Success!');
}).catch((err) => {
console.error(`audioRenderer2 setInterruptMode Fail: ${err}`);
});
// 2.3 Set the listener.
audioRenderer2.on('audioInterrupt', async(interruptEvent) => {
console.info(`audioRenderer2 on audioInterrupt : ${JSON.stringify(interruptEvent)}`)
});
// 2.4 Start rendering.
await audioRenderer2.start();
console.info('startAudioRender2 success');
// 2.5 Obtain the buffer size.
const bufferSize = await audioRenderer2.getBufferSize();
console.info(`audio bufferSize: ${bufferSize}`);
// 2.6 Read the original audio data file.
let dir = globalThis.fileDir; // You must use the sandbox path.
const path2 = dir + '/music002_48000_32_1.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/music002_48000_32_1.wav
console.info(`audioRender2 file path: ${path2}`);
let ss2 = await fileio.createStream(path2,'r');
const totalSize2 = fileio.statSync(path2).size; // Size of the file to render.
console.info(`totalSize2 -------: ${totalSize2}`);
let discardHeader2 = new ArrayBuffer(bufferSize);
ss2.readSync(discardHeader2);
let rlen = 0;
rlen += bufferSize;
// 2.7 Render the original audio data in the buffer by using audioRenderer.
let id = setInterval(async () => {
if (audioRenderer2.state == audio.AudioState.STATE_RELEASED) { // The rendering stops if the audio renderer is in the STATE_RELEASED state.
ss2.closeSync();
audioRenderer2.stop();
clearInterval(id);
}
if (audioRenderer2.state == audio.AudioState.STATE_RUNNING) {
if (rlen >= totalSize2) { // The rendering stops if the file finishes reading.
ss2.closeSync();
await audioRenderer2.stop();
clearInterval(id);
}
let buf = new ArrayBuffer(bufferSize);
rlen += ss2.readSync(buf);
console.info(`Total bytes read from file: ${rlen}`);
await writeBuffer(buf, audioRenderer2);
} else {
console.info('check after next interval');
}
}, 30); // The timer interval is set based on the audio format. The unit is millisecond.
}
async writeBuffer(buf, audioRender) {
let writtenbytes;
await audioRender.write(buf).then((value) => {
writtenbytes = value;
console.info(`Actual written bytes: ${writtenbytes} `);
});
if (typeof(writtenbytes) != 'number' || writtenbytes < 0) {
console.error('get Write buffer failed. check the state of renderer');
}
}
// Integrated invoking entry.
async test(){
await runningAudioRender1();
await runningAudioRender2();
}
```
# Audio Stream Management Development

## Introduction
You can use **AudioStreamManager** to manage audio streams.

## Working Principles
The following figure shows the calling relationship of **AudioStreamManager** APIs.

**Figure 1** AudioStreamManager API calling relationship

![en-us_image_audio_stream_manager](figures/en-us_image_audio_stream_manager.png)

**NOTE**: During application development, use **getStreamManager()** to create an **AudioStreamManager** instance. Then, you can call **on('audioRendererChange')** or **on('audioCapturerChange')** to listen for status, client, and audio attribute changes of the audio playback or recording application. To cancel the listening for these changes, call **off('audioRendererChange')** or **off('audioCapturerChange')**. You can call **getCurrentAudioRendererInfoArray()** to obtain information about the audio playback application, such as the unique audio stream ID, UID of the audio playback client, and audio status. Similarly, you can call **getCurrentAudioCapturerInfoArray()** to obtain information about the audio recording application.

## How to Develop
For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9).
1. Create an **AudioStreamManager** instance.

   Before using **AudioStreamManager** APIs, you must use **getStreamManager()** to create an **AudioStreamManager** instance.

   ```js
   var audioManager = audio.getAudioManager();
   var audioStreamManager = audioManager.getStreamManager();
   ```
2. (Optional) Call **on('audioRendererChange')** to listen for audio renderer changes.

   If an application needs to receive notifications when the audio playback application status, audio playback client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).

   ```js
   audioStreamManager.on('audioRendererChange', (AudioRendererChangeInfoArray) => {
     // ...
   });
   ```

4. (Optional) Call **on('audioCapturerChange')** to listen for audio capturer changes.

   If an application needs to receive notifications when the audio recording application status, audio recording client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md).

   ```js
   audioStreamManager.on('audioCapturerChange', (AudioCapturerChangeInfoArray) => {
     // ...
   });
   ```

6. (Optional) Call **getCurrentAudioRendererInfoArray()** to obtain information about the current audio renderer.

   This API can be used to obtain the unique ID of the audio stream, UID of the audio playback client, audio status, and other information about the audio player. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured, for the device name and device address to be displayed correctly.

   ```js
   await audioStreamManager.getCurrentAudioRendererInfoArray().then( function (AudioRendererChangeInfoArray) {
     // ...
   });
   ```

7. (Optional) Call **getCurrentAudioCapturerInfoArray()** to obtain information about the current audio capturer.

   This API can be used to obtain the unique ID of the audio stream, UID of the audio recording client, audio status, and other information about the audio capturer. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured, for the device name and device address to be displayed correctly.

   ```js
   await audioStreamManager.getCurrentAudioCapturerInfoArray().then( function (AudioCapturerChangeInfoArray) {
     // ...
   });
   ```

...
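The callback and promise bodies above are elided in this diff view. As a hedged sketch of what an application might log in the **audioRendererChange** callback, assuming the **AudioRendererChangeInfo** fields (**streamId**, **clientUid**, **rendererState**, **rendererInfo**) described in the API reference:

```js
audioStreamManager.on('audioRendererChange', (AudioRendererChangeInfoArray) => {
  for (let i = 0; i < AudioRendererChangeInfoArray.length; i++) {
    const info = AudioRendererChangeInfoArray[i];
    console.info(`StreamId for ${i} is: ${info.streamId}`);             // Unique audio stream ID.
    console.info(`ClientUid for ${i} is: ${info.clientUid}`);           // UID of the audio playback client.
    console.info(`State for ${i} is: ${info.rendererState}`);           // Audio status.
    console.info(`Content for ${i} is: ${info.rendererInfo.content}`);  // Content type of the stream.
    console.info(`StreamUsage for ${i} is: ${info.rendererInfo.usage}`);// Stream usage.
  }
});
```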
# OpenSL ES Audio Recording Development

## Introduction
You can use OpenSL ES to develop the audio recording function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.

...
# OpenSL ES Audio Playback Development

## Introduction
You can use OpenSL ES to develop the audio playback function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned.

To use OpenSL ES to develop the audio playback function in OpenHarmony, perform the following steps.

...

5. Obtain the **bufferQueueItf** instance of the **SL_IID_OH_BUFFERQUEUE** interface.

   ```c++
   SLOHBufferQueueItf bufferQueueItf;
   (*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_OH_BUFFERQUEUE, &bufferQueueItf);
   ```

...
# Audio Error Codes
## 6800101 Invalid Parameter
**Error Message**
invalid parameter.
**Description**
A parameter passed in the API is invalid.
**Possible Causes**
The parameter is invalid. For example, the parameter value is not within the range supported.
**Solution**
Pass the correct parameters in the API.
## 6800102 Memory Allocation Failure
**Error Message**
allocate memory failed.
**Description**
Memory allocation fails or a null pointer occurs when the API is called.
**Possible Causes**
1. The system does not have sufficient memory for mapping.
2. Invalid instances are not destroyed in time to release the memory.
**Solution**
1. Destroy the existing instances.
2. Create a new instance. If the creation fails, stop related operations.
## 6800103 Unsupported State
**Error Message**
Operation not permit at current state.
**Description**
This operation is not allowed in the current state.
**Possible Causes**
The operation is not supported in the current state. For example, data is played before streams are started.
**Solution**
1. Check whether this operation is supported in the current state.
2. Switch the instance to the correct state and perform the operation.
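For example, a hedged sketch of guarding against this error, assuming the renderer write scenario from the development guides and that the failed call throws a **BusinessError** carrying this code:

```js
// Hypothetical guard: ensure the stream is running before writing, and catch 6800103 otherwise.
try {
  if (audioRenderer.state != audio.AudioState.STATE_RUNNING) {
    await audioRenderer.start(); // Switch the instance to the correct state first.
  }
  await audioRenderer.write(buf);
} catch (err) {
  if (err.code == 6800103) {
    console.error('Operation not permitted at current state.');
  }
}
```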
## 6800104 Unsupported Parameter Value
**Error Message**
unsupported operation.
**Description**
The parameter value is not supported.
**Possible Causes**
The value of the input parameter is not within the range supported.
**Solution**
1. Check the enums or other input parameters supported by the API.
2. Use a supported value.
## 6800105 Processing Timeout
**Error Message**
time out.
**Description**
Waiting for external processing times out.
**Possible Causes**
Waiting for external processing times out. For example, waiting for the application to fill in audio data times out.
**Solution**
Control the duration of the write operation, for example, by adding delayed processing.
## 6800201 Too Many Audio Streams
**Error Message**
stream number limited.
**Description**
The number of audio streams reaches the upper limit.
**Possible Causes**
Invalid audio streams are not released in time.
**Solution**
Release audio streams that are no longer used.
## 6800301 System Error
**Error Message**
system error.
**Description**
The system processing is abnormal.
**Possible Causes**
The system processing is abnormal, for example, system service restart or IPC exceptions.
**Solution**
Create the service again.