Commit 8e8b1cb2 authored by Gloria

Update docs against 17754

Signed-off-by: wusongqing <wusongqing@huawei.com>
Parent c4647178
@@ -17,11 +17,14 @@ Before developing an audio feature, especially before implementing audio data pr
Before developing features related to audio and video playback, you are advised to understand the following concepts:
- Playback process: network protocol > container format > audio and video codec > graphics/audio rendering
- Network protocols: HLS, HTTP, HTTPS, and more
- Container formats: MP4, MKV, MPEG-TS, WebM, and more
- Encoding formats: H.263/H.264/H.265, MPEG4/MPEG2, and more
## Introduction to Audio Streams

An audio stream is an independent audio data processing unit that has a specific audio format and audio usage scenario information. The audio stream can be used in playback and recording scenarios, and supports independent volume adjustment and audio device routing.
@@ -29,20 +32,20 @@ The basic audio stream information is defined by [AudioStreamInfo](../reference/
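As a rough sketch (not part of the original page), the basic stream information maps to an **AudioStreamInfo** object from `@ohos.multimedia.audio`; the concrete enum values below are illustrative, and actual support varies by device:

```ts
import audio from '@ohos.multimedia.audio';

// A minimal sketch of basic audio stream information.
// The values are examples only; check device support before use.
let audioStreamInfo: audio.AudioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,   // 44.1 kHz
  channels: audio.AudioChannel.CHANNEL_2,                    // stereo
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // signed 16-bit, little endian
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW    // raw PCM
};
```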
### Audio Stream Usage Scenario Information
In addition to the basic information (which describes only the audio data), an audio stream carries usage scenario information. This is needed because audio streams differ in volume, device routing, and concurrency policy. The system chooses an appropriate processing policy for an audio stream based on its usage scenario information, thereby delivering a better user experience.
- Playback scenario
  Information about the audio playback scenario is defined by using [StreamUsage](../reference/apis/js-apis-audio.md#streamusage) and [ContentType](../reference/apis/js-apis-audio.md#contenttype); a sketch of setting both appears after this list.

  - **StreamUsage** specifies the usage type of an audio stream, for example, media, voice communication, voice assistant, notification, and ringtone.

  - **ContentType** specifies the content type of the data in an audio stream, for example, speech, music, movie, notification tone, and ringtone.
- Recording scenario
  Information about the audio stream recording scenario is defined by [SourceType](../reference/apis/js-apis-audio.md#sourcetype8); see the sketch after this list.

  **SourceType** specifies the recording source type of an audio stream, including the mic source, voice recognition source, and voice communication source.
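As a hedged illustration (assumed usage, not taken from this page), the playback scenario information maps to **AudioRendererInfo** and the recording scenario information to **AudioCapturerInfo**; the chosen enum values are examples:

```ts
import audio from '@ohos.multimedia.audio';

// Playback scenario: a music stream used for media playback.
let rendererInfo: audio.AudioRendererInfo = {
  content: audio.ContentType.CONTENT_TYPE_MUSIC, // content type of the stream
  usage: audio.StreamUsage.STREAM_USAGE_MEDIA,   // usage type of the stream
  rendererFlags: 0                               // 0 is the common default flag
};

// Recording scenario: capture from the mic source.
let capturerInfo: audio.AudioCapturerInfo = {
  source: audio.SourceType.SOURCE_TYPE_MIC, // recording source type
  capturerFlags: 0                          // 0 is the common default flag
};
```

Passing different usage or source values lets the system apply different volume, routing, and concurrency policies to otherwise identical audio data.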
## Supported Audio Formats
@@ -58,9 +61,9 @@ The sampling rate varies according to the device type.
- Mono and stereo are supported. For details, see [AudioChannel](../reference/apis/js-apis-audio.md#audiochannel8).
- The following sampling formats are supported: U8 (unsigned 8-bit integer), S16LE (signed 16-bit integer, little endian), S24LE (signed 24-bit integer, little endian), S32LE (signed 32-bit integer, little endian), and F32LE (signed 32-bit floating point number, little endian). For details, see [AudioSampleFormat](../reference/apis/js-apis-audio.md#audiosampleformat8).
Due to system restrictions, only some devices support the sampling formats S24LE, S32LE, and F32LE.
Little endian means that the least significant byte of the data is stored at the smallest memory address and the most significant byte at the largest. For example, in S16LE the 16-bit sample 0x1234 is stored as the byte sequence 0x34, 0x12. This storage mode aligns the memory address with the bit weight of the data: the byte at the largest address carries the highest weight, and the byte at the smallest address carries the lowest.
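To tie the format and scenario information together, the sketch below creates an audio renderer with `audio.createAudioRenderer`; it assumes 48 kHz stereo S16LE raw PCM, which may not be available on every device:

```ts
import audio from '@ohos.multimedia.audio';

// Minimal sketch: create an AudioRenderer from format + scenario info.
// Assumes 48 kHz stereo S16LE raw PCM is supported on the target device.
async function createMusicRenderer(): Promise<audio.AudioRenderer> {
  let options: audio.AudioRendererOptions = {
    streamInfo: {
      samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000,
      channels: audio.AudioChannel.CHANNEL_2,
      sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
      encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
    },
    rendererInfo: {
      content: audio.ContentType.CONTENT_TYPE_MUSIC,
      usage: audio.StreamUsage.STREAM_USAGE_MEDIA,
      rendererFlags: 0
    }
  };
  return await audio.createAudioRenderer(options);
}
```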
The audio and video formats supported by the APIs of the media module are described in [AVPlayer and AVRecorder](avplayer-avrecorder-overview.md).