Commit 903347fc authored by wusongqing

update docs against 5573

Signed-off-by: wusongqing <wusongqing@huawei.com>
Parent: ca56d099
# Audio Overview
You can use APIs provided by the audio module to implement audio-related features, including audio playback and volume management.
>![](../public_sys-resources/icon-note.gif) **NOTE**
>Due to permission issues, these features are temporarily unavailable for the standard system.
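
Bearing in mind the permission limitation noted above, the following is a minimal sketch (not taken from this commit) of how volume management is typically reached through the module's **AudioManager**; the choice of the media stream and the logging are illustrative assumptions:

```js
import audio from '@ohos.multimedia.audio';

// Minimal sketch: query the media-stream volume through the AudioManager.
// AudioVolumeType.MEDIA is an example value, not a requirement.
const audioManager = audio.getAudioManager();

async function printMediaVolume() {
    let volume = await audioManager.getVolume(audio.AudioVolumeType.MEDIA);
    let maxVolume = await audioManager.getMaxVolume(audio.AudioVolumeType.MEDIA);
    console.info('Media volume: ' + volume + ' (max ' + maxVolume + ')');
}
```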
## Basic Concepts
- **Sampling**
  Sampling is a process to obtain discrete-time signals by extracting samples from analog signals in a continuous time domain at a specific interval.
- **Sampling rate**
  Sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal. It is measured in Hz. Generally, the human hearing range is from 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 22.05 kHz, 16 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, 96 kHz, and 192 kHz.
- **Channel**
  Channels refer to different spatial positions where independent audio signals are recorded or played. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback.
- **Audio frame**
  Audio data is in stream form. For the convenience of audio algorithm processing and transmission, it is generally agreed that a data amount in a unit of 2.5 to 60 milliseconds is one audio frame. This unit is called sampling time, and its length is specific to codecs and the application requirements.
- **PCM**
  Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples. A rough size calculation combining these parameters is sketched after this list.
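
As a concrete illustration of how these parameters combine, the following sketch (not part of the original document) estimates the raw PCM data size for one second and for one 20 ms frame of a 44.1 kHz, two-channel, 16-bit stream; all the numbers are example values:

```js
// Rough PCM size arithmetic, for illustration only.
const samplingRate = 44100;   // samples per second per channel
const channelCount = 2;       // stereo
const bytesPerSample = 2;     // 16-bit PCM (S16LE)
const frameDurationMs = 20;   // one common audio frame length

const bytesPerSecond = samplingRate * channelCount * bytesPerSample;   // 176,400 bytes
const bytesPerFrame = bytesPerSecond * (frameDurationMs / 1000);       // 3,528 bytes

console.info('One second of PCM: ' + bytesPerSecond + ' bytes');
console.info('One 20 ms frame of PCM: ' + bytesPerFrame + ' bytes');
```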
# Audio Rendering Development

During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioRenderer** instance.
To ensure that the UI thread is not blocked, most **AudioRenderer** calls are asynchronous. Each API is provided in both callback and promise forms. The following examples use the promise functions. For more information, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
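
As an illustration of the two styles, **start()** can be called either way. The snippet below is only a sketch and assumes an **AudioRenderer** instance named `audioRenderer` has already been created (see step 1 below):

```js
// Callback style: the result is delivered through an asynchronous callback.
audioRenderer.start((err) => {
    if (err) {
        console.error('Renderer start failed');
    } else {
        console.info('Renderer started');
    }
});

// Promise style (used in the examples that follow); call it from an async function.
await audioRenderer.start();
```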
## How to Develop
1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.
```js
var audioStreamInfo = {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
    channels: audio.AudioChannel.CHANNEL_1,
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}
```
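
The hunk above shows only the stream parameters. A minimal sketch of the rest of this step, assuming the **AudioRendererInfo** fields and the **audio.createAudioRenderer()** API documented in [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8), might look as follows; the content and usage values are placeholders, not recommendations:

```js
// Sketch only: combine the stream parameters above with renderer information
// and create the AudioRenderer instance.
var audioRendererInfo = {
    content: audio.ContentType.CONTENT_TYPE_SPEECH,          // illustrative placeholder
    usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION, // illustrative placeholder
    rendererFlags: 1
}

var audioRendererOptions = {
    streamInfo: audioStreamInfo,
    rendererInfo: audioRendererInfo
}

let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
console.info('AudioRenderer created');
```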
2. Use **on('interrupt')** to subscribe to audio interruption events.

   In the case of audio interruption, the application may encounter write failures. To avoid such failures, interruption-unaware applications can check **audioRenderer.state** before writing audio data, and they can obtain more details by subscribing to the audio interruption events. For details, see [InterruptEvent](../reference/apis/js-apis-audio.md#interruptevent9).
```js
audioRenderer.on('interrupt', (interruptEvent) => {
    console.info('InterruptEvent Received');
    console.info('InterruptType: ' + interruptEvent.eventType);
    console.info('InterruptForceType: ' + interruptEvent.forceType);
    console.info('InterruptHint: ' + interruptEvent.hintType);

    if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_FORCE) {
        switch (interruptEvent.hintType) {
            // Force Pause: Action was taken by the framework.
            // Halt the write calls to avoid data loss.
            case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
                isPlay = false;
                break;
            // Force Stop: Action was taken by the framework.
            // Halt the write calls to avoid data loss.
            case audio.InterruptHint.INTERRUPT_HINT_STOP:
                isPlay = false;
                break;
            // Force Duck: Action was taken by the framework,
            // just notifying the app that the volume has been reduced.
            case audio.InterruptHint.INTERRUPT_HINT_DUCK:
                break;
            // Force Unduck: Action was taken by the framework,
            // just notifying the app that the volume has been restored.
            case audio.InterruptHint.INTERRUPT_HINT_UNDUCK:
                break;
        }
    } else if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_SHARE) {
        switch (interruptEvent.hintType) {
            // Share Resume: Action is to be taken by the app.
            // Resume the force-paused stream if required.
            case audio.InterruptHint.INTERRUPT_HINT_RESUME:
                startRenderer();
                break;
            // Share Pause: The stream has been interrupted.
            // The app can choose to pause or play concurrently.
            case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
                isPlay = false;
                pauseRenderer();
                break;
        }
    }
});
```
3. Use **start()** to start audio rendering.
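
   The code for this step, and for the guide's subsequent data-writing step (presumably based on **write()**), is not included in this excerpt. The following is a minimal sketch under those assumptions; the state checks mirror the style of the other examples, and the buffer handling is an illustrative placeholder:

```js
// Sketch only: start the renderer, then write PCM data one buffer at a time.
async function startRenderer() {
    var state = audioRenderer.state;
    // Starting is meaningful only from the prepared, paused, or stopped state.
    if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
        state != audio.AudioState.STATE_STOPPED) {
        console.info('Renderer is not in a startable state');
        return;
    }
    await audioRenderer.start();
    state = audioRenderer.state;
    if (state == audio.AudioState.STATE_RUNNING) {
        console.info('Renderer started');
    } else {
        console.error('Renderer start failed');
    }
}

async function writeOneBuffer(pcmData) {
    // The renderer reports its preferred buffer size; pcmData is assumed to be
    // an ArrayBuffer of raw PCM no larger than that size.
    const bufferSize = await audioRenderer.getBufferSize();
    if (pcmData.byteLength > bufferSize) {
        console.error('Buffer is larger than the renderer buffer size');
        return;
    }
    await audioRenderer.write(pcmData);
}
```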
5. (Optional) Call **pause()** or **stop()** to pause or stop rendering.
```js
async function pauseRenderer() {
    var state = audioRenderer.state;
    if (state != audio.AudioState.STATE_RUNNING) {
        console.info('Renderer is not running');
        return;
    }
    await audioRenderer.pause();
    state = audioRenderer.state;
    if (state == audio.AudioState.STATE_PAUSED) {
        console.info('Renderer paused');
    } else {
        console.error('Renderer pause failed');
    }
}

async function stopRenderer() {
    var state = audioRenderer.state;
    // Stopping is valid only while the renderer is running or paused.
    if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
        console.info('Renderer is not running or paused');
        return;
    }
    await audioRenderer.stop();
    state = audioRenderer.state;
    if (state == audio.AudioState.STATE_STOPPED) {
        console.info('Renderer stopped');
    } else {
        console.error('Renderer stop failed');
    }
}
```
**AudioRenderer** uses a large number of system resources. Therefore, ensure that the resources are released after the task is complete.
```js
async function releaseRenderer() {
    var state = audioRenderer.state;
    if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
        console.info('Resources already released');
        return;
    }
    await audioRenderer.release();
    state = audioRenderer.state;
    if (state == audio.AudioState.STATE_RELEASED) {
        console.info('Renderer released');
    } else {
        console.error('Renderer release failed');
    }
}
```
The commit also updates the camera development guide. The following snippet enumerates the available cameras and then creates a camera input stream:

```js
await cameraManager.getCameras((err, cameras) => {
    cameraArray = cameras
})

for (let cameraIndex = 0; cameraIndex < cameraArray.length; cameraIndex++) {
    console.log('cameraId : ' + cameraArray[cameraIndex].cameraId)             // Obtain the camera ID.
    console.log('cameraPosition : ' + cameraArray[cameraIndex].cameraPosition) // Obtain the camera position.
    console.log('cameraType : ' + cameraArray[cameraIndex].cameraType)         // Obtain the camera type.
    console.log('connectionType : ' + cameraArray[cameraIndex].connectionType) // Obtain the camera connection type.
}

// Create a camera input stream.
let cameraInput
await cameraManager.createCameraInput(cameraArray[0].cameraId).then((input) => {
    console.log('Promise returned with the CameraInput instance');
    cameraInput = input
})
```