Commit 903347fc authored by wusongqing

update docs against 5573

Signed-off-by: wusongqing <wusongqing@huawei.com>
Parent: ca56d099
# Audio Overview

You can use APIs provided by the audio module to implement audio-related features, including audio playback and volume management.

## Basic Concepts

- **Sampling**

  Sampling is a process to obtain discrete-time signals by extracting samples from analog signals in a continuous time domain at a specific interval.

- **Sampling rate**

  Sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal. It is measured in Hz. Generally, the human hearing range is from 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 22.05 kHz, 16 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, 96 kHz, and 192 kHz.

- **Channel**

  Channels refer to different spatial positions where independent audio signals are recorded or played. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback.

- **Audio frame**

  Audio data is in stream form. For the convenience of audio algorithm processing and transmission, it is generally agreed that a data amount in a unit of 2.5 to 60 milliseconds is one audio frame. This unit is called sampling time, and its length is specific to the codec and the application requirements.

- **PCM**

  Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples.
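
To make these definitions concrete, here is a small, self-contained sketch that computes the raw PCM data rate and the size of one audio frame. The 20 ms frame length and the 16-bit stereo format are example values chosen for illustration, not requirements.

```js
// Example values only: 44.1 kHz sampling rate, 2 channels, 16-bit samples, 20 ms frames.
const samplingRate = 44100;    // samples per second, per channel
const channelCount = 2;        // stereo
const bytesPerSample = 2;      // 16-bit PCM (for example, SAMPLE_FORMAT_S16LE) = 2 bytes
const frameDurationMs = 20;    // one audio frame, within the typical 2.5-60 ms range

// Raw PCM data rate in bytes per second.
const bytesPerSecond = samplingRate * channelCount * bytesPerSample;   // 176400

// Size of one audio frame in bytes.
const bytesPerFrame = bytesPerSecond * frameDurationMs / 1000;         // 3528

console.info('PCM data rate: ' + bytesPerSecond + ' bytes/s');
console.info('Audio frame size: ' + bytesPerFrame + ' bytes');
```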
@@ -20,8 +20,6 @@ During application development, you are advised to use **on('stateChange')** to ...
To ensure that the UI thread is not blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions. For more information, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
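
As an illustration of the two styles (assuming an **AudioRenderer** instance named `audioRenderer`, created as in step 1 below, and using **getBufferSize()** as the example API):

```js
// Promise style, as used in the examples in this guide.
let bufferSize = await audioRenderer.getBufferSize();
console.info('Buffer size: ' + bufferSize);

// Equivalent callback style.
audioRenderer.getBufferSize((err, size) => {
  if (err) {
    console.error('getBufferSize failed');
    return;
  }
  console.info('Buffer size: ' + size);
});
```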
## How to Develop
1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.
@@ -31,7 +29,7 @@ To ensure that the UI thread is not blocked, most **AudioRenderer** calls are as ...
    var audioStreamInfo = {
        samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
        channels: audio.AudioChannel.CHANNEL_1,
        sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
        encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
    }
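
For context, a minimal sketch of how this stream info could be combined with renderer info and passed to **createAudioRenderer()** is shown below. The import path, the `audioRendererInfo` values, and the variable names are illustrative assumptions, not part of the change above.

```js
import audio from '@ohos.multimedia.audio';  // assumed import path for the audio module

// Illustrative renderer info; choose content/usage values that match your use case.
var audioRendererInfo = {
  content: audio.ContentType.CONTENT_TYPE_SPEECH,
  usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
  rendererFlags: 0
}

var audioRendererOptions = {
  streamInfo: audioStreamInfo,      // the stream info defined above
  rendererInfo: audioRendererInfo
}

// Create the renderer; the subsequent steps operate on this instance.
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
```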
@@ -58,49 +56,49 @@ To ensure that the UI thread is not blocked, most **AudioRenderer** calls are as ...
In the case of audio interruption, the application may encounter write failures. To avoid such failures, interruption unaware applications can use **audioRenderer.state** to check the renderer state before writing audio data. The applications can obtain more details by subscribing to the audio interruption events. For details, see [InterruptEvent](../reference/apis/js-apis-audio.md#interruptevent9).
```js
audioRenderer.on('interrupt', (interruptEvent) => {
  console.info('InterruptEvent Received');
  console.info('InterruptType: ' + interruptEvent.eventType);
  console.info('InterruptForceType: ' + interruptEvent.forceType);
  console.info('InterruptHint: ' + interruptEvent.hintType);

  if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_FORCE) {
    switch (interruptEvent.hintType) {
      // Force Pause: Action was taken by the framework.
      // Halt the write calls to avoid data loss.
      case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
        isPlay = false;
        break;
      // Force Stop: Action was taken by the framework.
      // Halt the write calls to avoid data loss.
      case audio.InterruptHint.INTERRUPT_HINT_STOP:
        isPlay = false;
        break;
      // Force Duck: Action was taken by the framework,
      // just notifying the app that the volume has been reduced.
      case audio.InterruptHint.INTERRUPT_HINT_DUCK:
        break;
      // Force Unduck: Action was taken by the framework,
      // just notifying the app that the volume has been restored.
      case audio.InterruptHint.INTERRUPT_HINT_UNDUCK:
        break;
    }
  } else if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_SHARE) {
    switch (interruptEvent.hintType) {
      // Share Resume: Action is to be taken by the app.
      // Resume the force-paused stream if required.
      case audio.InterruptHint.INTERRUPT_HINT_RESUME:
        startRenderer();
        break;
      // Share Pause: The stream has been interrupted.
      // The app can choose to pause or play concurrently.
      case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
        isPlay = false;
        pauseRenderer();
        break;
    }
  }
});
```
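
The state check mentioned above can be as small as the sketch below; the `ArrayBuffer` named `buf` (already filled with PCM data) and the helper name are assumptions used only to show the guard around **write()**.

```js
// Skip the write when the renderer is not running, for example after a forced pause or stop.
async function writeIfRunning(buf) {
  if (audioRenderer.state != audio.AudioState.STATE_RUNNING) {
    console.info('Renderer is not running, skipping write');
    return;
  }
  let bytesWritten = await audioRenderer.write(buf);
  console.info('Bytes written: ' + bytesWritten);
}
```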
3. Use **start()** to start audio rendering.
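
A `startRenderer()` helper in the same style as the pause and stop helpers shown in step 5 might look like the following sketch; the exact set of states from which **start()** may be called is an assumption here.

```js
async function startRenderer() {
  var state = audioRenderer.state;
  // Assumption: starting makes sense only from the prepared, paused, or stopped state.
  if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
      state != audio.AudioState.STATE_STOPPED) {
    console.info('Renderer is not in a startable state');
    return;
  }
  await audioRenderer.start();
  state = audioRenderer.state;
  if (state == audio.AudioState.STATE_RUNNING) {
    console.info('Renderer started');
  } else {
    console.error('Renderer start failed');
  }
}
```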
@@ -178,38 +176,38 @@ To ensure that the UI thread is not blocked, most **AudioRenderer** calls are as ...
5. (Optional) Call **pause()** or **stop()** to pause or stop rendering.
```js
async function pauseRenderer() {
  var state = audioRenderer.state;
  if (state != audio.AudioState.STATE_RUNNING) {
    console.info('Renderer is not running');
    return;
  }
  await audioRenderer.pause();
  state = audioRenderer.state;
  if (state == audio.AudioState.STATE_PAUSED) {
    console.info('Renderer paused');
  } else {
    console.error('Renderer pause failed');
  }
}

async function stopRenderer() {
  var state = audioRenderer.state;
  // Stopping is valid only when the renderer is running or paused.
  if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
    console.info('Renderer is not running or paused');
    return;
  }
  await audioRenderer.stop();
  state = audioRenderer.state;
  if (state == audio.AudioState.STATE_STOPPED) {
    console.info('Renderer stopped');
  } else {
    console.error('Renderer stop failed');
  }
}
```
@@ -218,22 +216,20 @@ To ensure that the UI thread is not blocked, most **AudioRenderer** calls are as ...
**AudioRenderer** uses a large number of system resources. Therefore, ensure that the resources are released after the task is complete.
```js
async function releaseRenderer() {
  var state = audioRenderer.state;
  if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
    console.info('Resource already released');
    return;
  }
  await audioRenderer.release();
  state = audioRenderer.state;
  if (state == audio.AudioState.STATE_RELEASED) {
    console.info('Renderer released');
  } else {
    console.error('Renderer release failed');
  }
}
```
@@ -54,16 +54,16 @@ await cameraManager.getCameras((err, cameras) => {

        cameraArray = cameras
    })

    for (let cameraIndex = 0; cameraIndex < cameraArray.length; cameraIndex++) {
        console.log('cameraId : ' + cameraArray[cameraIndex].cameraId)                // Obtain the camera ID.
        console.log('cameraPosition : ' + cameraArray[cameraIndex].cameraPosition)    // Obtain the camera position.
        console.log('cameraType : ' + cameraArray[cameraIndex].cameraType)            // Obtain the camera type.
        console.log('connectionType : ' + cameraArray[cameraIndex].connectionType)    // Obtain the camera connection type.
    }

    // Create a camera input stream.
    let cameraInput
    await cameraManager.createCameraInput(cameraArray[0].cameraId).then((input) => {
        console.log('Promise returned with the CameraInput instance');
        cameraInput = input
    })
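
One small robustness note, not part of the change above: `cameraArray[0]` is indexed without checking that any camera was returned. A guard such as the following hypothetical sketch avoids that.

```js
// Check that at least one camera device is available before using cameraArray[0].
if (cameraArray.length === 0) {
    console.error('No camera device available');
} else {
    // Safe to access cameraArray[0].cameraId here.
}
```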