Commit 903347fc authored by wusongqing

update docs against 5573

Signed-off-by: wusongqing <wusongqing@huawei.com>
Parent: ca56d099
# Audio Overview

You can use APIs provided by the audio module to implement audio-related features, including audio playback and volume management.

>![](../public_sys-resources/icon-note.gif) **NOTE**
>Due to permission issues, the above features are temporarily unavailable for the standard system.

## Basic Concepts
- **Sampling**

  Sampling is the process of obtaining discrete-time signals by extracting samples from an analog signal in the continuous time domain at a specific interval.

- **Sampling rate**

  The sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal. It is measured in Hz. Generally, the human hearing range is 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 16 kHz, 22.05 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, 96 kHz, and 192 kHz.

- **Channel**

  Channels refer to the different spatial positions where independent audio signals are recorded or played. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback.

- **Audio frame**

  Audio data is in stream form. For the convenience of audio algorithm processing and transmission, a data amount covering 2.5 to 60 milliseconds is generally treated as one audio frame. This duration is called the sampling time, and its length depends on the codec and the application requirements.
- **PCM**

  Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples. The sketch after this list shows how the sampling rate, channel count, and sample size determine the size of one PCM audio frame.
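
To make these definitions concrete, the following sketch computes the size of one PCM audio frame. It is plain JavaScript with no OpenHarmony API involved; the 44.1 kHz stereo 16-bit configuration and the 20 ms frame length are illustrative assumptions, not defaults of any module.

```
// Sketch: size of one PCM audio frame for an assumed stream configuration.
// All values below are illustrative assumptions, not API defaults.
const samplingRateHz = 44100   // samples per second per channel (44.1 kHz)
const channelCount = 2         // stereo
const bytesPerSample = 2       // 16-bit PCM
const frameDurationMs = 20     // one frame of 20 ms, within the 2.5-60 ms range above

// Number of samples contained in one frame, per channel.
const samplesPerFrame = Math.round(samplingRateHz * frameDurationMs / 1000)  // 882

// Total bytes occupied by one frame of interleaved PCM data.
const frameSizeBytes = samplesPerFrame * channelCount * bytesPerSample       // 3528

console.log('One ' + frameDurationMs + ' ms frame holds ' + samplesPerFrame + ' samples per channel')
console.log('and occupies ' + frameSizeBytes + ' bytes of PCM data.')
```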
@@ -20,8 +20,6 @@ During application development, you are advised to use **on('stateChange')** to

To ensure that the UI thread is not blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions. For more information, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).

## How to Develop

1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.
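
As an illustration of this step and of the promise style described above, the following sketch creates an **AudioRenderer** with an assumed stream configuration. The exact option fields and enum values should be checked against [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8); this is a minimal example, not a complete configuration.

```
import audio from '@ohos.multimedia.audio';

// Assumed stream configuration for illustration only; verify the field names
// and enum values against the AudioRenderer reference linked above.
let audioRendererOptions = {
  streamInfo: {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
    channels: audio.AudioChannel.CHANNEL_2,
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
  },
  rendererInfo: {
    content: audio.ContentType.CONTENT_TYPE_MUSIC,
    usage: audio.StreamUsage.STREAM_USAGE_MEDIA,
    rendererFlags: 0
  }
}

// Promise style: await the created instance instead of passing a callback.
// (Run inside an async function.)
let audioRenderer = await audio.createAudioRenderer(audioRendererOptions)
console.log('AudioRenderer created, state: ' + audioRenderer.state)
```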
@@ -235,5 +233,3 @@ To ensure that the UI thread is not blocked, most **AudioRenderer** calls are as
}
```
@@ -54,16 +54,16 @@ await cameraManager.getCameras((err, cameras) => {
  cameraArray = cameras
})

for (let cameraIndex = 0; cameraIndex < cameraArray.length; cameraIndex++) {
  console.log('cameraId : ' + cameraArray[cameraIndex].cameraId)              // Obtain the camera ID.
  console.log('cameraPosition : ' + cameraArray[cameraIndex].cameraPosition)  // Obtain the camera position.
  console.log('cameraType : ' + cameraArray[cameraIndex].cameraType)          // Obtain the camera type.
  console.log('connectionType : ' + cameraArray[cameraIndex].connectionType)  // Obtain the camera connection type.
}

// Create a camera input stream.
let cameraInput
await cameraManager.createCameraInput(cameraArray[0].cameraId).then((input) => {
  console.log('Promise returned with the CameraInput instance');
  cameraInput = input
})
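
One detail worth noting in the hunk above: **getCameras()** is invoked in callback style, so the preceding **await** does not actually wait for the callback to deliver the camera list. Below is a minimal sketch of one way to wrap that same callback form in a promise; **getCamerasAsync** is a hypothetical helper name introduced here, and whether the camera module also offers a native promise overload should be checked in its API reference.

```
// Sketch: wrap the callback-style getCameras() call shown above in a Promise
// so that "await" really waits for the camera list. getCamerasAsync is a
// hypothetical helper introduced for illustration.
function getCamerasAsync(cameraManager) {
  return new Promise((resolve, reject) => {
    cameraManager.getCameras((err, cameras) => {
      if (err) {
        reject(err)
        return
      }
      resolve(cameras)
    })
  })
}

// Usage (inside an async function):
// let cameraArray = await getCamerasAsync(cameraManager)
```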