Commit d40937a8 authored by: G Gloria

Update docs against 19919+20541+20074+20114+19974+20217+20175+20633

Signed-off-by: wusongqing<wusongqing@huawei.com>
Parent 82c2f0f1
......@@ -10,6 +10,7 @@
- [Using AudioRenderer for Audio Playback](using-audiorenderer-for-playback.md)
- [Using OpenSL ES for Audio Playback](using-opensl-es-for-playback.md)
- [Using TonePlayer for Audio Playback (for System Applications Only)](using-toneplayer-for-playback.md)
- [Using OHAudio for Audio Playback](using-ohaudio-for-playback.md)
- [Audio Playback Concurrency Policy](audio-playback-concurrency.md)
- [Volume Management](volume-management.md)
- [Audio Effect Management](audio-effect-management.md)
......@@ -21,6 +22,7 @@
- [Using AVRecorder for Audio Recording](using-avrecorder-for-recording.md)
- [Using AudioCapturer for Audio Recording](using-audiocapturer-for-recording.md)
- [Using OpenSL ES for Audio Recording](using-opensl-es-for-recording.md)
- [Using OHAudio for Audio Recording](using-ohaudio-for-recording.md)
- [Microphone Management](mic-management.md)
- [Audio Recording Stream Management](audio-recording-stream-management.md)
- [Audio Input Device Management](audio-input-device-management.md)
......@@ -29,6 +31,14 @@
- [Developing Audio Call](audio-call-development.md)
- [Video Playback](video-playback.md)
- [Video Recording](video-recording.md)
- Audio and Video Codecs
- [Obtaining Supported Codecs](obtain-supported-codecs.md)
- [Audio Encoding](audio-encoding.md)
- [Audio Decoding](audio-decoding.md)
- [Video Encoding](video-encoding.md)
- [Video Decoding](video-decoding.md)
- [Audio/Video Encapsulation](audio-video-encapsulation.md)
- [Audio/Video Decapsulation](audio-video-decapsulation.md)
- AVSession
- [AVSession Overview](avsession-overview.md)
- Local AVSession
......
# Audio Decoding
You can call the native APIs provided by the **AudioDecoder** module to decode audio, that is, to decode media data into PCM streams.
Currently, the following decoding capabilities are supported:
| Container Specification| Audio Decoding Type |
| -------- | :--------------------------- |
| mp4 | AAC, MPEG (MP3), FLAC, Vorbis|
| m4a | AAC |
| flac | FLAC |
| ogg | Vorbis |
| aac | AAC |
| mp3 | MPEG (MP3) |
**Usage Scenario**
- Audio playback
Decode audio and transmit the data to the speaker for playing.
- Audio rendering
Decode audio and transmit the data to the audio processing module for audio rendering.
- Audio editing
Decode audio and transmit the data for audio editing (for example, adjusting the playback speed of a channel). Audio editing is performed based on PCM streams.
## How to Develop
Read [AudioDecoder](../reference/native-apis/_audio_decoder.md) for the API reference.
Refer to the code snippet below to complete the entire audio decoding process, including creating a decoder, setting decoding parameters (such as the sampling rate, bit rate, and number of audio channels), and starting, refreshing, resetting, and destroying the decoder.
During application development, you must call the APIs in the defined sequence. Otherwise, an exception or undefined behavior may occur.
For details about the complete code, see [Sample](https://gitee.com/openharmony/multimedia_av_codec/blob/master/test/nativedemo/audio_demo/avcodec_audio_decoder_demo.cpp).
The figure below shows the call relationship of audio decoding.
![Call relationship of audio decoding](figures/audio-decode.png)
1. Create a decoder instance.
```cpp
// Create a decoder by name.
OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_AUDIO_MPEG, false);
const char *name = OH_AVCapability_GetName(capability);
OH_AVCodec *audioDec = OH_AudioDecoder_CreateByName(name);
```
```cpp
// Create a decoder by MIME type.
OH_AVCodec *audioDec = OH_AudioDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_AUDIO_MPEG);
```
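The example above uses the MPEG (MP3) decoder. Decoders for the other formats in the table can be created the same way; the MIME constants below are assumed to follow the standard AVCodec base definitions.
```cpp
// Create decoders for other supported audio formats (MIME constants assumed from the AVCodec base header).
OH_AVCodec *aacDec = OH_AudioDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_AUDIO_AAC);
OH_AVCodec *flacDec = OH_AudioDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_AUDIO_FLAC);
OH_AVCodec *vorbisDec = OH_AudioDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_AUDIO_VORBIS);
```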
```cpp
// Initialize the queues.
class ADecSignal {
public:
std::mutex inMutex_;
std::mutex outMutex_;
std::mutex startMutex_;
std::condition_variable inCond_;
std::condition_variable outCond_;
std::condition_variable startCond_;
std::queue<uint32_t> inQueue_;
std::queue<uint32_t> outQueue_;
std::queue<OH_AVMemory *> inBufferQueue_;
std::queue<OH_AVMemory *> outBufferQueue_;
std::queue<OH_AVCodecBufferAttr> attrQueue_;
};
ADecSignal *signal_;
```
2. Call **OH_AudioDecoder_SetCallback()** to set callback functions.
Register the **OH_AVCodecAsyncCallback** struct that defines the following callback function pointers:
- **OnError**, a callback used to report a codec operation error
- **OnOutputFormatChanged**, a callback used to report a codec stream change, for example, audio channel change
- **OnInputBufferAvailable**, a callback used to report input data required, which means that the decoder is ready for receiving data
- **OnOutputBufferAvailable**, a callback used to report output data generated, which means that decoding is complete
You need to process the callback functions to ensure that the decoder runs properly.
```cpp
// Set the OnError callback function.
static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
{
(void)codec;
(void)errorCode;
(void)userData;
}
// Set the OnOutputFormatChanged callback function.
static void OnOutputFormatChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
{
(void)codec;
(void)format;
(void)userData;
}
// Set the OnInputBufferAvailable callback function, which is used to send the input stream to the InputBuffer queue.
static void OnInputBufferAvailable(OH_AVCodec *codec, uint32_t index, OH_AVMemory *data, void *userData)
{
(void)codec;
ADecSignal *signal = static_cast<ADecSignal *>(userData);
unique_lock<mutex> lock(signal->inMutex_);
signal->inQueue_.push(index);
signal->inBufferQueue_.push(data);
signal->inCond_.notify_all();
// The input stream is sent to the InputBuffer queue.
}
// Set the OnOutputBufferAvailable callback function, which is used to send the PCM stream obtained after decoding to the OutputBuffer queue.
static void OnOutputBufferAvailable(OH_AVCodec *codec, uint32_t index, OH_AVMemory *data, OH_AVCodecBufferAttr *attr,
void *userData)
{
(void)codec;
ADecSignal *signal = static_cast<ADecSignal *>(userData);
unique_lock<mutex> lock(signal->outMutex_);
signal->outQueue_.push(index);
signal->outBufferQueue_.push(data);
if (attr) {
signal->attrQueue_.push(*attr);
}
signal->outCond_.notify_all();
// The index of the output buffer is sent to OutputQueue_.
// The decoded data is sent to the OutputBuffer queue.
}
signal_ = new ADecSignal();
OH_AVCodecAsyncCallback cb = {&OnError, &OnOutputFormatChanged, &OnInputBufferAvailable, &OnOutputBufferAvailable};
// Set the asynchronous callbacks.
int32_t ret = OH_AudioDecoder_SetCallback(audioDec, cb, signal_);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
3. Call **OH_AudioDecoder_Configure()** to configure the decoder.
The following options are mandatory: sampling rate, bit rate, and number of audio channels. The maximum input length is optional.
- For AAC decoding, the parameter that specifies whether the data type is Audio Data Transport Stream (ADTS) must be specified. If this parameter is not specified, the data type is considered as Low Overhead Audio Transport Multiplex (LATM).
- For Vorbis decoding, the ID header and setup header must also be specified.
```cpp
enum AudioFormatType : int32_t {
TYPE_AAC = 0,
TYPE_FLAC = 1,
TYPE_MP3 = 2,
TYPE_VORBIS = 3,
};
// Set the decoding parameters.
int32_t ret;
// (Mandatory) Configure the audio sampling rate.
constexpr uint32_t DEFAULT_SAMPLERATE = 44100;
// (Mandatory) Configure the audio bit rate.
constexpr uint32_t DEFAULT_BITRATE = 32000;
// (Mandatory) Configure the number of audio channels.
constexpr uint32_t DEFAULT_CHANNEL_COUNT = 2;
// (Optional) Configure the maximum input length.
constexpr uint32_t DEFAULT_MAX_INPUT_SIZE = 1152;
OH_AVFormat *format = OH_AVFormat_Create();
// Set the format.
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_SAMPLE_RATE.data(), DEFAULT_SAMPLERATE);
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_BITRATE.data(), DEFAULT_BITRATE);
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_CHANNEL_COUNT.data(), DEFAULT_CHANNEL_COUNT);
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_MAX_INPUT_SIZE.data(), DEFAULT_MAX_INPUT_SIZE);
if (audioType == TYPE_AAC) {
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_AAC_IS_ADTS.data(), DEFAULT_AAC_TYPE);
}
if (audioType == TYPE_VORBIS) {
OH_AVFormat_SetStringValue(format, MediaDescriptionKey::MD_KEY_IDENTIFICATION_HEADER.data(), DEFAULT_ID_HEADER);
OH_AVFormat_SetStringValue(format, MediaDescriptionKey::MD_KEY_SETUP_HEADER.data(), DEFAULT_SETUP_HEADER);
}
// Configure the decoder.
ret = OH_AudioDecoder_Configure(audioDec, format);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
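The configuration code above references **DEFAULT_AAC_TYPE**, **DEFAULT_ID_HEADER**, and **DEFAULT_SETUP_HEADER** without defining them. The definitions below are a hedged sketch: the value **1** is assumed to mark ADTS-framed AAC input, and the Vorbis header strings are placeholders that must be replaced with the headers extracted from the actual stream.
```cpp
// Assumption: 1 indicates ADTS-framed AAC input; use 0 for LATM input.
constexpr uint32_t DEFAULT_AAC_TYPE = 1;
// Placeholders only: the Vorbis ID header and setup header must come from the actual stream (for example, from the demuxer).
const char *DEFAULT_ID_HEADER = "<vorbis ID header>";
const char *DEFAULT_SETUP_HEADER = "<vorbis setup header>";
```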
4. Call **OH_AudioDecoder_Prepare()** to prepare internal resources for the decoder.
```cpp
ret = OH_AudioDecoder_Prepare(audioDec);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
5. Call **OH_AudioDecoder_Start()** to start the decoder.
```c++
inputFile_ = std::make_unique<std::ifstream>();
// Open the path of the binary file to be decoded.
inputFile_->open(inputFilePath.data(), std::ios::in | std::ios::binary);
// Configure the path of the output file.
outFile_ = std::make_unique<std::ofstream>();
outFile_->open(outputFilePath.data(), std::ios::out | std::ios::binary);
// Start decoding.
ret = OH_AudioDecoder_Start(audioDec);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
6. Call **OH_AudioDecoder_PushInputData()** to write the data to decode.
To indicate the End of Stream (EOS), pass in the **AVCODEC_BUFFER_FLAGS_EOS** flag.
```c++
// Configure the buffer information.
OH_AVCodecBufferAttr info;
// Set the package size, offset, and timestamp.
info.size = pkt_->size;
info.offset = 0;
info.pts = pkt_->pts;
info.flags = AVCODEC_BUFFER_FLAGS_CODEC_DATA;
auto buffer = signal_->inBufferQueue_.front();
if (inputFile_->eof()) {
info.size = 0;
info.flags = AVCODEC_BUFFER_FLAGS_EOS;
} else {
inputFile_->read((char *)OH_AVMemory_GetAddr(buffer), INPUT_FRAME_BYTES);
}
uint32_t index = signal_->inQueue_.front();
// Send the data to the input queue for decoding. The index is the subscript of the queue.
int32_t ret = OH_AudioDecoder_PushInputData(audioDec, index, info);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
7. Call **OH_AudioDecoder_FreeOutputData()** to output decoded PCM streams.
```c++
OH_AVCodecBufferAttr attr = signal_->attrQueue_.front();
OH_AVMemory *data = signal_->outBufferQueue_.front();
uint32_t index = signal_->outQueue_.front();
// Write the decoded data (specified by data) to the output file.
outFile_->write(reinterpret_cast<char *>(OH_AVMemory_GetAddr(data)), attr.size);
// Free the buffer that stores the data.
ret = OH_AudioDecoder_FreeOutputData(audioDec, index);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
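When the decoder reaches the end of the stream, the output buffer carries the **AVCODEC_BUFFER_FLAGS_EOS** flag. A minimal sketch of how the output loop might detect it (the `isRunning_` flag is illustrative, mirroring the encoder sample later in this document):
```c++
// Exit the output loop when the decoder reports the end of the stream.
if (attr.flags == AVCODEC_BUFFER_FLAGS_EOS) {
    isRunning_.store(false);
}
```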
8. (Optional) Call **OH_AudioDecoder_Flush()** to refresh the decoder.
After **OH_AudioDecoder_Flush()** is called, the decoder remains in the running state, but the current queue is cleared and the buffer storing the decoded data is freed. To continue decoding, you must call **OH_AudioDecoder_Start()** again.
You need to call **OH_AudioDecoder_Flush()** in the following cases:
* The EOS of the file is reached.
* An error occurs, and **OH_AudioDecoder_IsValid** returns **true** (indicating that execution can continue).
```c++
// Refresh the decoder.
ret = OH_AudioDecoder_Flush(audioDec);
if (ret != AV_ERR_OK) {
// Exception handling.
}
// Start decoding again.
ret = OH_AudioDecoder_Start(audioDec);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
9. (Optional) Call **OH_AudioDecoder_Reset()** to reset the decoder.
After **OH_AudioDecoder_Reset()** is called, the decoder returns to the initialized state. To continue decoding, you must call **OH_AudioDecoder_Configure()** and then **OH_AudioDecoder_Start()**.
```c++
// Reset the decoder.
ret = OH_AudioDecoder_Reset(audioDec);
if (ret != AV_ERR_OK) {
// Exception handling.
}
// Reconfigure the decoder.
ret = OH_AudioDecoder_Configure(audioDec, format);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
10. Call **OH_AudioDecoder_Stop()** to stop the decoder.
```c++
// Stop the decoder.
ret = OH_AudioDecoder_Stop(audioDec);
if (ret != AV_ERR_OK) {
// Exception handling.
}
return ret;
```
11. Call **OH_AudioDecoder_Destroy()** to destroy the decoder instance and release resources.
**NOTE**: You only need to call this API once.
```c++
// Call OH_AudioDecoder_Destroy to destroy the decoder.
ret = OH_AudioDecoder_Destroy(audioDec);
if (ret != AV_ERR_OK) {
// Exception handling.
} else {
audioDec = NULL; // Set the pointer to NULL so that the decoder is not destroyed repeatedly.
}
return ret;
```
\ No newline at end of file
# Audio Encoding
You can call the native APIs provided by the **AudioEncoder** module to encode audio, that is, to compress audio PCM data into a desired format.
PCM data can be from any source. For example, you can use a microphone to record audio data or import edited PCM data. After audio encoding, you can output streams in the desired format and encapsulate the streams into a target file.
Currently, the following encoding capabilities are supported:
| Container Specification| Audio Encoding Type|
| -------- | :----------- |
| mp4 | AAC, FLAC |
| m4a | AAC |
| flac | FLAC |
| aac | AAC |
**Usage Scenario**
- Audio recording
Record and transfer PCM data, and encode the data into streams in the desired format.
- Audio editing
Import edited PCM data, and encode the data into streams in the desired format.
## How to Develop
Read [AudioEncoder](../reference/native-apis/_audio_encoder.md) for the API reference.
Refer to the code snippet below to complete the entire audio encoding process, including creating an encoder, setting encoding parameters (such as the sampling rate, bit rate, and number of audio channels), and starting, refreshing, resetting, and destroying the encoder.
During application development, you must call the APIs in the defined sequence. Otherwise, an exception or undefined behavior may occur.
For details about the complete code, see [Sample](https://gitee.com/openharmony/multimedia_av_codec/blob/master/test/nativedemo/audio_demo/avcodec_audio_aac_encoder_demo.cpp).
The figure below shows the call relationship of audio encoding.
![Call relationship of audio encoding](figures/audio-encode.png)
1. Create an encoder instance.
You can create an encoder by name or MIME type.
```cpp
// Create an encoder by name.
OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_AUDIO_AAC, true);
const char *name = OH_AVCapability_GetName(capability);
OH_AVCodec *audioEnc = OH_AudioEncoder_CreateByName(name);
```
```cpp
// Create an encoder by MIME type.
OH_AVCodec *audioEnc = OH_AudioEncoder_CreateByMime(OH_AVCODEC_MIMETYPE_AUDIO_AAC);
```
```cpp
// Initialize the queues.
class AEncSignal {
public:
std::mutex inMutex_;
std::mutex outMutex_;
std::mutex startMutex_;
std::condition_variable inCond_;
std::condition_variable outCond_;
std::condition_variable startCond_;
std::queue<uint32_t> inQueue_;
std::queue<uint32_t> outQueue_;
std::queue<OH_AVMemory *> inBufferQueue_;
std::queue<OH_AVMemory *> outBufferQueue_;
std::queue<OH_AVCodecBufferAttr> attrQueue_;
};
AEncSignal *signal_ = new AEncSignal();
```
2. Call **OH_AudioEncoder_SetCallback()** to set callback functions.
Register the **OH_AVCodecAsyncCallback** struct that defines the following callback function pointers:
- **OnError**, a callback used to report a codec operation error
- **OnOutputFormatChanged**, a callback used to report a codec stream change, for example, audio channel change
- **OnInputBufferAvailable**, a callback used to report input data required, which means that the encoder is ready for receiving PCM data
- **OnOutputBufferAvailable**, a callback used to report output data generated, which means that encoding is complete
You need to process the callback functions to ensure that the encoder runs properly.
```cpp
// Set the OnError callback function.
static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
{
(void)codec;
(void)errorCode;
(void)userData;
}
// Set the OnOutputFormatChanged callback function.
static void OnOutputFormatChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
{
(void)codec;
(void)format;
(void)userData;
}
// Set the OnInputBufferAvailable callback function, which is used to send the input PCM data to the InputBuffer queue.
static void OnInputBufferAvailable(OH_AVCodec *codec, uint32_t index, OH_AVMemory *data, void *userData)
{
(void)codec;
// The input stream is sent to the InputBuffer queue.
AEncSignal *signal = static_cast<AEncSignal *>(userData);
cout << "OnInputBufferAvailable received, index:" << index << endl;
unique_lock<mutex> lock(signal->inMutex_);
signal->inQueue_.push(index);
signal->inBufferQueue_.push(data);
signal->inCond_.notify_all();
}
// Set the OnOutputBufferAvailable callback function, which is used to send the encoded stream to the OutputBuffer queue.
static void OnOutputBufferAvailable(OH_AVCodec *codec, uint32_t index, OH_AVMemory *data, OH_AVCodecBufferAttr *attr,
void *userData)
{
(void)codec;
// The index of the output buffer is sent to the OutputQueue.
// The encoded data is sent to the OutputBuffer queue.
AEncSignal *signal = static_cast<AEncSignal *>(userData);
unique_lock<mutex> lock(signal->outMutex_);
signal->outQueue_.push(index);
signal->outBufferQueue_.push(data);
if (attr) {
signal->attrQueue_.push(*attr);
}
}
OH_AVCodecAsyncCallback cb = {&OnError, &OnOutputFormatChanged, &OnInputBufferAvailable, &OnOutputBufferAvailable};
// Set the asynchronous callbacks.
int32_t ret = OH_AudioEncoder_SetCallback(audioEnc, cb, signal_);
```
3. Call **OH_AudioEncoder_Configure()** to configure the encoder.
The following options are mandatory: sampling rate, bit rate, number of audio channels, audio channel type, and bit depth. The maximum input length is optional.
For FLAC encoding, the compliance level and sampling precision are also mandatory.
```cpp
enum AudioFormatType : int32_t {
TYPE_AAC = 0,
TYPE_FLAC = 1,
};
int32_t ret;
// (Mandatory) Configure the audio sampling rate.
constexpr uint32_t DEFAULT_SAMPLERATE = 44100;
// (Mandatory) Configure the audio bit rate.
constexpr uint32_t DEFAULT_BITRATE = 32000;
// (Mandatory) Configure the number of audio channels.
constexpr uint32_t DEFAULT_CHANNEL_COUNT = 2;
// (Mandatory) Configure the audio channel type.
constexpr AudioChannelLayout CHANNEL_LAYOUT = AudioChannelLayout::STEREO;
// (Mandatory) Configure the audio bit depth. Only SAMPLE_S16LE and SAMPLE_S32LE are available for FLAC encoding.
constexpr OH_BitsPerSample SAMPLE_FORMAT = OH_BitsPerSample::SAMPLE_S32LE;
// (Mandatory) Configure the audio bit depth. Only SAMPLE_S32P is available for AAC encoding.
constexpr OH_BitsPerSample SAMPLE_AAC_FORMAT = OH_BitsPerSample::SAMPLE_S32P;
// Configure the audio compliance level. The default value is 0, and the value ranges from -2 to 2.
constexpr int32_t COMPLIANCE_LEVEL = 0;
// (Mandatory) Configure the audio sampling precision. SAMPLE_S16LE, SAMPLE_S24LE, and SAMPLE_S32LE are available.
constexpr OH_BitsPerSample BITS_PER_CODED_SAMPLE = OH_BitsPerSample::SAMPLE_S24LE;
// (Optional) Configure the maximum input length.
constexpr uint32_t DEFAULT_MAX_INPUT_SIZE = 1024 * DEFAULT_CHANNEL_COUNT * sizeof(float); // AAC
OH_AVFormat *format = OH_AVFormat_Create();
// Set the format.
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_SAMPLE_RATE.data(), DEFAULT_SAMPLERATE);
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_BITRATE.data(), DEFAULT_BITRATE);
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_CHANNEL_COUNT.data(), DEFAULT_CHANNEL_COUNT);
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_MAX_INPUT_SIZE.data(), DEFAULT_MAX_INPUT_SIZE);
OH_AVFormat_SetLongValue(format, MediaDescriptionKey::MD_KEY_CHANNEL_LAYOUT.data(), CHANNEL_LAYOUT);
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_AUDIO_SAMPLE_FORMAT.data(), SAMPLE_FORMAT);
if (audioType == TYPE_AAC) {
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_AUDIO_SAMPLE_FORMAT.data(), SAMPLE_AAC_FORMAT);
}
if (audioType == TYPE_FLAC) {
OH_AVFormat_SetIntValue(format, MediaDescriptionKey::MD_KEY_BITS_PER_CODED_SAMPLE.data(), BITS_PER_CODED_SAMPLE);
OH_AVFormat_SetLongValue(format, MediaDescriptionKey::MD_KEY_COMPLIANCE_LEVEL.data(), COMPLIANCE_LEVEL);
}
// Configure the encoder.
ret = OH_AudioEncoder_Configure(audioEnc, format);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
4. Call **OH_AudioEncoder_Prepare()** to prepare internal resources for the encoder.
```c++
OH_AudioEncoder_Prepare(audioEnc);
```
5. Call **OH_AudioEncoder_Start()** to start the encoder.
```c++
inputFile_ = std::make_unique<std::ifstream>();
// Open the path of the binary file to be encoded.
inputFile_->open(inputFilePath.data(), std::ios::in | std::ios::binary);
// Configure the path of the output file.
outFile_ = std::make_unique<std::ofstream>();
outFile_->open(outputFilePath.data(), std::ios::out | std::ios::binary);
// Start encoding.
ret = OH_AudioEncoder_Start(audioEnc);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
6. Call **OH_AudioEncoder_PushInputData()** to write the data to encode.
To indicate the End of Stream (EOS), pass in the **AVCODEC_BUFFER_FLAGS_EOS** flag.
For AAC encoding, **FRAME_SIZE** (number of sampling points) is fixed at **1024**.
For FLAC encoding, set **FRAME_SIZE** based on the table below.
| Sampling Rate| FRAME_SIZE|
| :----: | :----: |
| 8000 | 576 |
| 16000 | 1152 |
| 22050 | 2304 |
| 24000 | 2304 |
| 32000 | 2304 |
| 44100 | 4608 |
| 48000 | 4608 |
| 88200 | 8192 |
| 96000 | 8192 |
**NOTE**: If **FRAME_SIZE** is not set to **1024** for AAC encoding, an error code is returned. In the case of FLAC encoding, if **FRAME_SIZE** is set to a value greater than the value listed in the table for a given sampling rate, an error code is returned; if **FRAME_SIZE** is set to a value less than the value listed, the encoded file may be damaged.
```c++
constexpr int32_t FRAME_SIZE = 1024; // AAC encoding
constexpr int32_t DEFAULT_CHANNEL_COUNT = 2;
constexpr int32_t INPUT_FRAME_BYTES = DEFAULT_CHANNEL_COUNT * FRAME_SIZE * sizeof(float); // AAC encoding
// Configure the buffer information.
OH_AVCodecBufferAttr info;
// Set the package size, offset, and timestamp.
info.size = pkt_->size;
info.offset = 0;
info.pts = pkt_->pts;
info.flags = AVCODEC_BUFFER_FLAGS_CODEC_DATA;
auto buffer = signal_->inBufferQueue_.front();
if (inputFile_->eof()) {
info.size = 0;
info.flags = AVCODEC_BUFFER_FLAGS_EOS;
} else {
inputFile_->read((char *)OH_AVMemory_GetAddr(buffer), INPUT_FRAME_BYTES);
}
uint32_t index = signal_->inQueue_.front();
// Send the data to the input queue for encoding. The index is the subscript of the queue.
int32_t ret = OH_AudioEncoder_PushInputData(audioEnc, index, info);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
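The snippet above computes the input length for AAC. For FLAC, the per-push input length follows the same pattern but uses the **FRAME_SIZE** from the table above and the configured bit depth. A sketch assuming 48000 Hz stereo input with **SAMPLE_S16LE** samples (the `FLAC_*` names are illustrative only):
```c++
// FLAC at 48000 Hz: FRAME_SIZE is 4608 samples per channel (see the table above).
constexpr int32_t FLAC_FRAME_SIZE = 4608;
constexpr int32_t FLAC_CHANNEL_COUNT = 2;
// Assumption: with SAMPLE_S16LE input, each sample occupies sizeof(int16_t) bytes of interleaved PCM.
constexpr int32_t FLAC_INPUT_FRAME_BYTES = FLAC_CHANNEL_COUNT * FLAC_FRAME_SIZE * sizeof(int16_t);
```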
7. Call **OH_AudioEncoder_FreeOutputData()** to output the encoded stream.
```c++
OH_AVCodecBufferAttr attr = signal_->attrQueue_.front();
OH_AVMemory *data = signal_->outBufferQueue_.front();
uint32_t index = signal_->outQueue_.front();
// Write the encoded data (specified by data) to the output file.
outFile_->write(reinterpret_cast<char *>(OH_AVMemory_GetAddr(data)), attr.size);
// Release the output buffer.
ret = OH_AudioEncoder_FreeOutputData(audioEnc, index);
if (ret != AV_ERR_OK) {
// Exception handling.
}
if (attr.flags == AVCODEC_BUFFER_FLAGS_EOS) {
cout << "decode eos" << endl;
isRunning_.store(false);
break;
}
```
8. (Optional) Call **OH_AudioEncoder_Flush()** to refresh the encoder.
After **OH_AudioEncoder_Flush()** is called, the current encoding queue is cleared.
To continue encoding, you must call **OH_AudioEncoder_Start()** again.
You need to call **OH_AudioEncoder_Flush()** in the following cases:
* The EOS of the file is reached.
* An error occurs, and **OH_AudioEncoder_IsValid** returns **true** (indicating that execution can continue).
```c++
// Refresh the encoder.
ret = OH_AudioEncoder_Flush(audioEnc);
if (ret != AV_ERR_OK) {
// Exception handling.
}
// Start encoding again.
ret = OH_AudioEncoder_Start(audioEnc);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
9. (Optional) Call **OH_AudioEncoder_Reset()** to reset the encoder.
After **OH_AudioEncoder_Reset()** is called, the encoder returns to the initialized state. To continue encoding, you must call **OH_AudioEncoder_Configure()** and then **OH_AudioEncoder_Start()**.
```c++
// Reset the encoder.
ret = OH_AudioEncoder_Reset(audioEnc);
if (ret != AV_ERR_OK) {
// Exception handling.
}
// Reconfigure the encoder.
ret = OH_AudioEncoder_Configure(audioEnc, format);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
10. Call **OH_AudioEncoder_Stop()** to stop the encoder.
```c++
// Stop the encoder.
ret = OH_AudioEncoder_Stop(audioEnc);
if (ret != AV_ERR_OK) {
// Exception handling.
}
return ret;
```
11. Call **OH_AudioEncoder_Destroy()** to destroy the encoder instance and release resources.
**NOTE**: You only need to call this API once.
```c++
// Call OH_AudioEncoder_Destroy to destroy the encoder.
ret = OH_AudioEncoder_Destroy(audioEnc);
if (ret != AV_ERR_OK) {
// Exception handling.
} else {
audioEnc = NULL; // The encoder cannot be destroyed repeatedly.
}
return ret;
```
# Audio and Video Decapsulation
You can call the native APIs provided by the **AVDemuxer** module to decapsulate audio and video, that is, to extract audio and video frame data from bit stream data.
Currently, two data input types are supported: remote connection (over HTTP) and File Descriptor (FD).
The following decapsulation formats are supported:
| Media Format | Encapsulation Format |
| -------- | :----------------------------|
| Video | MP4, MPEG TS |
| Audio | M4A, AAC, MP3, OGG, FLAC, WAV|
**Usage Scenario**
- Audio and video playback
Decapsulate audio and video streams, decode the frame data obtained through decapsulation, and play the decoded data.
- Audio and video editing
Decapsulate audio and video streams, and edit the specified frames.
- Media file format conversion
Decapsulate audio and video streams, and encapsulate them into a new file format.
## How to Develop
Read [AVDemuxer](../reference/native-apis/_a_v_demuxer.md) and [AVSource](../reference/native-apis/_a_v_source.md) for the API reference.
> **NOTE**
>
> - To call the decapsulation APIs to parse a network playback path, request the **ohos.permission.INTERNET** permission by following the instructions provided in [Applying for Permissions](../security/accesstoken-guidelines.md).
> - To call the decapsulation APIs to parse a local file, request the **ohos.permission.READ_MEDIA** permission by following the instructions provided in [Applying for Permissions](../security/accesstoken-guidelines.md).
> - You can also use **ResourceManager.getRawFd** to obtain the FD of a file packed in the HAP file. For details, see [ResourceManager API Reference](../reference/apis/js-apis-resource-manager.md#getrawfd9).
1. Create a demuxer instance.
``` c++
// Create the FD. You must have the read permission on the file handle when opening the file.
std::string fileName = "test.mp4";
int fd = open(fileName.c_str(), O_RDONLY);
struct stat fileStatus {};
size_t fileSize = 0;
if (stat(fileName.c_str(), &fileStatus) == 0) {
fileSize = static_cast<size_t>(fileStatus.st_size);
} else {
printf("get stat failed");
return;
}
// Create a source resource object for the FD resource file. If offset is not the start position of the file or size is not the actual file size, the obtained data may be incomplete. As a result, the source resource object may fail to be created or subsequent decapsulation may fail.
OH_AVSource *source = OH_AVSource_CreateWithFD(fd, 0, fileSize);
if (source == nullptr) {
printf("create source failed");
return;
}
// (Optional) Create a source resource object for the URI resource file.
// OH_AVSource *source = OH_AVSource_CreateWithURI(uri);
```
```c++
// Create a demuxer for the resource object.
OH_AVDemuxer *demuxer = OH_AVDemuxer_CreateWithSource(source);
if (demuxer == nullptr) {
printf("create demuxer failed");
return;
}
```
2. (Optional) Obtain the number of tracks. If you know the track information, skip this step.
``` c++
// Obtain the number of tracks from the file source information.
OH_AVFormat *sourceFormat = OH_AVSource_GetSourceFormat(source);
if (sourceFormat == nullptr) {
printf("get source format failed");
return;
}
int32_t trackCount = 0;
OH_AVFormat_GetIntValue(sourceFormat, OH_MD_KEY_TRACK_COUNT, &trackCount);
OH_AVFormat_Destroy(sourceFormat);
```
3. (Optional) Obtain the track index and format. If you know the track information, skip this step.
``` c++
uint32_t audioTrackIndex = 0;
uint32_t videoTrackIndex = 0;
int32_t w = 0;
int32_t h = 0;
int32_t trackType;
for (uint32_t index = 0; index < (static_cast<uint32_t>(trackCount)); index++) {
// Obtain the track format.
OH_AVFormat *trackFormat = OH_AVSource_GetTrackFormat(source, index);
if (trackFormat == nullptr) {
printf("get track format failed");
return;
}
OH_AVFormat_GetIntValue(trackFormat, OH_MD_KEY_TRACK_TYPE, &trackType);
static_cast<OH_MediaType>(trackType) == OH_MediaType::MEDIA_TYPE_AUD ? audioTrackIndex = index : videoTrackIndex = index;
// Obtain the width and height of the video track.
if (trackType == OH_MediaType::MEDIA_TYPE_VID) {
OH_AVFormat_GetIntValue(trackFormat, OH_MD_KEY_WIDTH, &w);
OH_AVFormat_GetIntValue(trackFormat, OH_MD_KEY_HEIGHT, &h);
}
OH_AVFormat_Destroy(trackFormat);
}
```
4. Select a track, from which the demuxer reads data.
``` c++
if (OH_AVDemuxer_SelectTrackByID(demuxer, audioTrackIndex) != AV_ERR_OK) {
printf("select audio track failed: %d", audioTrackIndex);
return;
}
if (OH_AVDemuxer_SelectTrackByID(demuxer, videoTrackIndex) != AV_ERR_OK) {
printf("select video track failed: %d", videoTrackIndex);
return;
}
// (Optional) Deselect the track.
// OH_AVDemuxer_UnselectTrackByID(demuxer, audioTrackIndex);
```
5. (Optional) Seek to the specified time for the selected track.
``` c++
// Decapsulation is performed from this time.
// Note:
// 1. If OH_AVDemuxer_SeekToTime is called for an MPEG TS file, the target position may be a non-key frame. You can then call OH_AVDemuxer_ReadSample to check whether the current frame is a key frame based on the obtained OH_AVCodecBufferAttr. If it is a non-key frame, which causes display issues on the application side, cyclically read the frames until you reach the first key frame, where you can perform processing such as decoding.
// 2. If OH_AVDemuxer_SeekToTime is called for an OGG file, the file seeks to the start of the time interval (second) where the input parameter millisecond is located, which may cause a certain number of frame errors.
OH_AVDemuxer_SeekToTime(demuxer, 0, OH_AVSeekMode::SEEK_MODE_CLOSEST_SYNC);
```
6. Start decapsulation and cyclically obtain frame data. The code snippet below uses a file that contains audio and video tracks as an example.
``` c++
// Create a buffer to store the data obtained after decapsulation.
OH_AVMemory *buffer = OH_AVMemory_Create(w * h * 3 >> 1);
if (buffer == nullptr) {
printf("build buffer failed");
return;
}
OH_AVCodecBufferAttr info;
bool videoIsEnd = false;
bool audioIsEnd = false;
int32_t ret;
while (!audioIsEnd || !videoIsEnd) {
// Before calling OH_AVDemuxer_ReadSample, call OH_AVDemuxer_SelectTrackByID to select the track from which the demuxer reads data.
// Obtain audio frame data.
if (!audioIsEnd) {
ret = OH_AVDemuxer_ReadSample(demuxer, audioTrackIndex, buffer, &info);
if (ret == AV_ERR_OK) {
// Obtain and process the audio frame data in the buffer.
printf("audio info.size: %d\n", info.size);
if (info.flags == OH_AVCodecBufferFlags::AVCODEC_BUFFER_FLAGS_EOS) {
audioIsEnd = true;
}
}
}
if (!videoIsEnd) {
ret = OH_AVDemuxer_ReadSample(demuxer, videoTrackIndex, buffer, &info);
if (ret == AV_ERR_OK) {
// Obtain and process the video frame data in the buffer.
printf("video info.size: %d\n", info.size);
if (info.flags == OH_AVCodecBufferFlags::AVCODEC_BUFFER_FLAGS_EOS) {
videoIsEnd = true;
}
}
}
}
OH_AVMemory_Destroy(buffer);
```
7. Destroy the demuxer instance.
``` c++
// Manually set the instance to NULL after OH_AVSource_Destroy is called. Do not call this API repeatedly for the same instance; otherwise, a program error occurs.
if (OH_AVSource_Destroy(source) != AV_ERR_OK) {
printf("destroy source pointer error");
}
source = NULL;
// Manually set the instance to NULL after OH_AVDemuxer_Destroy is called. Do not call this API repeatedly for the same instance; otherwise, a program error occurs.
if (OH_AVDemuxer_Destroy(demuxer) != AV_ERR_OK) {
printf("destroy demuxer pointer error");
}
demuxer = NULL;
close(fd);
```
# Audio and Video Encapsulation
You can call the native APIs provided by the **AVMuxer** module to encapsulate audio and video, that is, to store encoded audio and video data to a file in a certain format.
Currently, the following encapsulation capabilities are supported:
| Encapsulation Format| Video Codec Type | Audio Codec Type | Cover Type |
| -------- | --------------------- | ---------------- | -------------- |
| mp4 | MPEG-4, AVC (H.264)| AAC, MPEG (MP3)| jpeg, png, bmp|
| m4a | MPEG-4, AVC (H.264)| AAC | jpeg, png, bmp|
**Usage Scenario**
- Video and audio recording
After you encode audio and video streams, encapsulate them into files.
- Audio and video editing
After you edit audio and video, encapsulate them into files.
- Audio and video transcoding
After transcoding audio and video, encapsulate them into files.
## How to Develop
Read [AVMuxer](../reference/native-apis/_a_v_muxer.md) for the API reference.
> **NOTE**
>
> To call the encapsulation APIs to write a local file, request the **ohos.permission.READ_MEDIA** and **ohos.permission.WRITE_MEDIA** permissions by following the instructions provided in [Applying for Permissions](../security/accesstoken-guidelines.md).
The following walks you through how to implement the entire process of audio and video encapsulation. It uses the MP4 format as an example.
1. Call **OH_AVMuxer_Create()** to create an **OH_AVMuxer** instance.
``` c++
// Set the encapsulation format to MP4.
OH_AVOutputFormat format = AV_OUTPUT_FORMAT_MPEG_4;
// Create a File Descriptor (FD) in read/write mode.
int32_t fd = open("test.mp4", O_CREAT | O_RDWR | O_TRUNC, S_IRUSR | S_IWUSR);
OH_AVMuxer *muxer = OH_AVMuxer_Create(fd, format);
```
2. (Optional) Call **OH_AVMuxer_SetRotation()** to set the rotation angle.
``` c++
// Set the rotation angle when a video image needs to be rotated.
OH_AVMuxer_SetRotation(muxer, 0);
```
3. Add an audio track.
**Method 1: Use OH_AVFormat_Create to create the format.**
``` c++
int audioTrackId = -1;
OH_AVFormat *formatAudio = OH_AVFormat_Create();
OH_AVFormat_SetStringValue(formatAudio, OH_MD_KEY_CODEC_MIME, OH_AVCODEC_MIMETYPE_AUDIO_AAC); // Mandatory.
OH_AVFormat_SetIntValue(formatAudio, OH_MD_KEY_AUD_SAMPLE_RATE, 44100); // Mandatory.
OH_AVFormat_SetIntValue(formatAudio, OH_MD_KEY_AUD_CHANNEL_COUNT, 2); // Mandatory.
int ret = OH_AVMuxer_AddTrack(muxer, &audioTrackId, formatAudio);
if (ret != AV_ERR_OK || audioTrackId < 0) {
// Failure to add the audio track.
}
OH_AVFormat_Destroy(formatAudio); // Destroy the format.
```
**Method 2: Use OH_AVFormat_CreateAudioFormat to create the format.**
``` c++
int audioTrackId = -1;
OH_AVFormat *formatAudio = OH_AVFormat_CreateAudioFormat(OH_AVCODEC_MIMETYPE_AUDIO_AAC, 44100, 2);
int ret = OH_AVMuxer_AddTrack(muxer, &audioTrackId, formatAudio);
if (ret != AV_ERR_OK || audioTrackId < 0) {
// Failure to add the audio track.
}
OH_AVFormat_Destroy(formatAudio); // Destroy the format.
```
4. Add a video track.
**Method 1: Use OH_AVFormat_Create to create the format.**
``` c++
int videoTrackId = -1;
char *buffer = ...; // Encoding configuration data. If there is no configuration data, leave the parameter unspecified.
size_t size =...; // Length of the encoding configuration data. Set this parameter based on project requirements.
OH_AVFormat *formatVideo = OH_AVFormat_Create();
OH_AVFormat_SetStringValue(formatVideo, OH_MD_KEY_CODEC_MIME, OH_AVCODEC_MIMETYPE_VIDEO_MPEG4); // Mandatory.
OH_AVFormat_SetIntValue(formatVideo, OH_MD_KEY_WIDTH, 1280); // Mandatory.
OH_AVFormat_SetIntValue(formatVideo, OH_MD_KEY_HEIGHT, 720); // Mandatory.
OH_AVFormat_SetBuffer(formatVideo, OH_MD_KEY_CODEC_CONFIG, buffer, size); // Optional
int ret = OH_AVMuxer_AddTrack(muxer, &videoTrackId, formatVideo);
if (ret != AV_ERR_OK || videoTrackId < 0) {
// Failure to add the video track.
}
OH_AVFormat_Destroy(formatVideo); // Destroy the format.
```
**Method 2: Use OH_AVFormat_CreateVideoFormat to create the format.**
``` c++
int videoTrackId = -1;
char *buffer = ...; // Encoding configuration data. If there is no configuration data, leave the parameter unspecified.
size_t size =...; // Length of the encoding configuration data. Set this parameter based on project requirements.
OH_AVFormat *formatVideo = OH_AVFormat_CreateVideoFormat(OH_AVCODEC_MIMETYPE_VIDEO_MPEG4, 1280, 720);
OH_AVFormat_SetBuffer(formatVideo, OH_MD_KEY_CODEC_CONFIG, buffer, size); // Optional
int ret = OH_AVMuxer_AddTrack(muxer, &videoTrackId, formatVideo);
if (ret != AV_ERR_OK || videoTrackId < 0) {
// Failure to add the video track.
}
OH_AVFormat_Destroy(formatVideo); // Destroy the format.
```
5. Add a cover track.
**Method 1: Use OH_AVFormat_Create to create the format.**
``` c++
int coverTrackId = -1;
OH_AVFormat *formatCover = OH_AVFormat_Create();
OH_AVFormat_SetStringValue(formatCover, OH_MD_KEY_CODEC_MIME, OH_AVCODEC_MIMETYPE_IMAGE_JPG);
OH_AVFormat_SetIntValue(formatCover, OH_MD_KEY_WIDTH, 1280);
OH_AVFormat_SetIntValue(formatCover, OH_MD_KEY_HEIGHT, 720);
int ret = OH_AVMuxer_AddTrack(muxer, &coverTrackId, formatCover);
if (ret != AV_ERR_OK || coverTrackId < 0) {
// Failure to add the cover track.
}
OH_AVFormat_Destroy(formatCover); // Destroy the format.
```
**Method 2: Use OH_AVFormat_CreateVideoFormat to create the format.**
``` c++
int coverTrackId = -1;
OH_AVFormat *formatCover = OH_AVFormat_CreateVideoFormat(OH_AVCODEC_MIMETYPE_IMAGE_JPG, 1280, 720);
int ret = OH_AVMuxer_AddTrack(muxer, &coverTrackId, formatCover);
if (ret != AV_ERR_OK || coverTrackId < 0) {
// Failure to add the cover track.
}
OH_AVFormat_Destroy(formatCover); // Destroy the format.
```
6. Call **OH_AVMuxer_Start()** to start encapsulation.
``` c++
// Call Start() to write the file header. After this API is called, you cannot set media parameters or add tracks.
if (OH_AVMuxer_Start(muxer) != AV_ERR_OK) {
// Exception handling.
}
```
7. Call **OH_AVMuxer_WriteSample()** to write data, including video, audio, and cover data.
``` c++
// Data can be written only after Start() is called.
int size = ...;
OH_AVMemory *sample = OH_AVMemory_Create(size); // Create an AVMemory instance.
// Write data to the sample buffer. For details, see the usage of OH_AVMemory.
// Encapsulate the cover. One image must be written at a time.
// Set buffer information.
OH_AVCodecBufferAttr info;
info.pts = ...; // Playback start time of the current data, in microseconds.
info.size = size; // Length of the current data.
info.offset = 0; // Offset. Generally, the value is 0.
info.flags = AVCODEC_BUFFER_FLAGS_SYNC_FRAME; // Flag of the current data. For details, see OH_AVCodecBufferFlags.
int trackId = audioTrackId; // Select the track to be written.
int ret = OH_AVMuxer_WriteSample(muxer, trackId, sample, info);
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
8. Call **OH_AVMuxer_Stop()** to stop encapsulation.
``` c++
// Call Stop() to write the file trailer. After this API is called, you cannot write media data.
if (OH_AVMuxer_Stop(muxer) != AV_ERR_OK) {
// Exception handling.
}
```
9. Call **OH_AVMuxer_Destroy()** to release the instance.
``` c++
if (OH_AVMuxer_Destroy(muxer) != AV_ERR_OK) {
// Exception handling.
}
muxer = NULL;
close(fd); // Close the FD.
```
......@@ -77,6 +77,7 @@ The table below lists the supported audio playback formats.
| Video Format| Mandatory or Not|
| -------- | -------- |
| H.265<sup>10+</sup> | Yes|
| H.264 | Yes|
| MPEG-2 | No|
| MPEG-4 | No|
......@@ -87,10 +88,10 @@ The table below lists the supported playback formats and mainstream resolutions.
| Video Container Format| Description| Resolution|
| -------- | -------- | -------- |
| MP4| Video formats: H.264, MPEG-2, MPEG-4, and H.263<br>Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
| MKV| Video formats: H.264, MPEG-2, MPEG-4, and H.263<br>Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
| TS| Video formats: H.264, MPEG-2, and MPEG-4<br>Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
| WebM| Video format: VP8<br>Audio format: VORBIS| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
| MP4| Video formats: H.265<sup>10+</sup>, H.264, MPEG-2, MPEG-4, and H.263<br>Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
| MKV| Video formats: H.265<sup>10+</sup>, H.264, MPEG-2, MPEG-4, and H.263<br>Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
| TS| Video formats: H.265<sup>10+</sup>, H.264, MPEG-2, and MPEG-4<br>Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
| WebM| Video format: VP8<br>Audio format: VORBIS| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p|
## AVRecorder
......
......@@ -34,7 +34,7 @@ Read [Camera](../reference/apis/js-apis-camera.md) for the API reference.
console.log('prepare failed and error is ' + err.message);
}
})
let videoSurfaceId = null;
AVRecorder.getInputSurface().then((surfaceId) => {
console.info('getInputSurface success');
......@@ -66,7 +66,7 @@ Read [Camera](../reference/apis/js-apis-camera.md) for the API reference.
videoFrameRate: 30 // Video frame rate.
},
url: 'fd://35',
rotation: 0
rotation: 90 // 90° is the default vertical display angle. You can use other values based on project requirements.
}
// Create an AVRecorder instance.
let avRecorder;
......
......@@ -13,7 +13,7 @@ Read [Image](../reference/apis/js-apis-image.md#imagesource) for APIs related to
```
2. Obtain an image.
- Method 1: Obtain the sandbox path. For details about how to obtain the sandbox path, see [Obtaining the Application Development Path](../application-models/application-context-stage.md#obtaining-the-application-development-path). For details about the application sandbox and how to push files to the application sandbox, see [File Management](../file-management/app-sandbox-directory.md).
- Method 1: Obtain the sandbox path. For details about how to obtain the sandbox path, see [Obtaining Application File Paths](../application-models/application-context-stage.md#obtaining-application-file-paths). For details about the application sandbox and how to push files to the application sandbox, see [File Management](../file-management/app-sandbox-directory.md).
```ts
// Code on the stage model
......@@ -110,6 +110,11 @@ Read [Image](../reference/apis/js-apis-image.md#imagesource) for APIs related to
After the decoding is complete and the pixel map is obtained, you can perform subsequent [image processing](image-transformation.md).
5. Release the **PixelMap** instance.
```ts
pixelMap.release();
```
## Sample Code - Decoding an Image in Resource Files
1. Obtain a resource manager.
......@@ -140,4 +145,7 @@ Read [Image](../reference/apis/js-apis-image.md#imagesource) for APIs related to
const pixelMap = await imageSource.createPixelMap();
```
<!--no_check-->
\ No newline at end of file
5. Release the **PixelMap** instance.
```ts
pixelMap.release();
```
# Obtaining Supported Codecs
Different devices support different codecs. Before invoking or configuring a codec, you need to query the codec specifications supported.
You can call the native APIs provided by the **AVCapability** module to check whether related capabilities are supported.
## How to Develop
Read [AVCapability](../reference/native-apis/_a_v_capability.md) for the API reference.
1. Obtain a codec capability instance.
```c
// Obtain a codec capability instance based on the MIME type and encoder flag.
OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
// Obtain a codec capability instance based on the MIME type, encoder flag, and software/hardware type.
OH_AVCapability *capability = OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false, SOFTWARE);
```
2. Query the specifications provided by the codec capability instance.
```c
// Check whether the codec capability instance describes a hardware codec.
bool isHardware = OH_AVCapability_IsHardware(capability);
// Obtain the codec name of the codec capability instance.
const char *codecName = OH_AVCapability_GetName(capability);
// Obtain the maximum number of instances supported by the codec capability instance.
int32_t maxSupportedInstances = OH_AVCapability_GetMaxSupportedInstances(capability);
// Obtain the bit rate range supported by the encoder.
OH_AVRange bitrateRange;
int32_t ret = OH_AVCapability_GetEncoderBitrateRange(capability, &bitrateRange);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Check whether the codec capability instance supports a specific bit rate mode (bitrateMode is the bit rate mode to check).
bool isEncoderBitrateModeSupported = OH_AVCapability_IsEncoderBitrateModeSupported(capability, bitrateMode);
// Obtain the quality range supported by the encoder.
OH_AVRange qualityRange;
int32_t ret = OH_AVCapability_GetEncoderQualityRange(capability, &qualityRange);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the complexity range supported by the encoder.
OH_AVRange complexityRange;
int32_t ret = OH_AVCapability_GetEncoderComplexityRange(capability, &complexityRange);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the supported audio sampling rates.
const int32_t *sampleRates;
uint32_t sampleRateNum = 0;
int32_t ret = OH_AVCapability_GetAudioSupportedSampleRates(capability, &sampleRates, &sampleRateNum);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the number of audio channels supported.
OH_AVRange channelCountRange;
int32_t ret = OH_AVCapability_GetAudioChannelCountRange(capability, &channelCountRange);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the width alignment value supported.
int32_t widthAlignment;
int32_t ret = OH_AVCapability_GetVideoWidthAlignment(capability, &widthAlignment);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the height alignment value supported.
int32_t heightAlignment;
int32_t ret = OH_AVCapability_GetVideoHeightAlignment(capability, &heightAlignment);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the width range when the height is 1080.
OH_AVRange widthRange;
int32_t ret = OH_AVCapability_GetVideoWidthRangeForHeight(capability, 1080, &widthRange);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the height range when the width is 1920.
OH_AVRange heightRange;
int32_t ret = OH_AVCapability_GetVideoHeightRangeForWidth(capability, 1920, &heightRange);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the width range supported.
OH_AVRange widthRange;
int32_t ret = OH_AVCapability_GetVideoWidthRange(capability, &widthRange);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the height range supported.
OH_AVRange heightRange;
int32_t ret = OH_AVCapability_GetVideoHeightRange(capability, &heightRange);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Check whether the codec capability instance supports the 1080p resolution.
bool isVideoSizeSupported = OH_AVCapability_IsVideoSizeSupported(capability, 1920, 1080);
// Obtain the video frame rate range supported.
OH_AVRange frameRateRange;
int32_t ret = OH_AVCapability_GetVideoFrameRateRange(capability, &frameRateRange);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the video frame rate range when the resolution is 1920 x 1080.
OH_AVRange frameRateRange;
int32_t ret = OH_AVCapability_GetVideoFrameRateRangeForSize(capability, 1920, 1080, &frameRateRange);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Check whether the codec capability instance supports the scenario where the resolution is 1080p and the frame rate is 30 fps.
bool areVideoSizeAndFrameRateSupported = OH_AVCapability_AreVideoSizeAndFrameRateSupported(capability, 1920, 1080, 30);
// Obtain the supported color formats and the number of supported color formats.
const int32_t *pixFormats;
uint32_t pixFormatNum = 0;
int32_t ret = OH_AVCapability_GetVideoSupportedPixelFormats(capability, &pixFormats, &pixFormatNum);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the profiles supported.
const int32_t *profiles;
uint32_t profileNum = 0;
int32_t ret = OH_AVCapability_GetSupportedProfiles(capability, &profiles, &profileNum);
if (ret != AV_ERR_OK) {
// Exception processing.
}
// Obtain the level range of a specific profile.
const int32_t *levels;
uint32_t levelNum = 0;
int32_t ret = OH_AVCapability_GetSupportedLevelsForProfile(capability, 0, &levels, &levelNum);
if (ret != AV_ERR_OK) {
// Exception processing.
}
```
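As a usage sketch that reuses the calls shown above, you might verify that a hardware H.264 decoder supports a given resolution before creating it by its reported name (the **HARDWARE** category value is assumed to mirror the **SOFTWARE** value used earlier).
```c
// Query the hardware H.264 decoder capability and create the decoder by its reported name.
OH_AVCapability *cap = OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false, HARDWARE);
if (cap != NULL && OH_AVCapability_IsVideoSizeSupported(cap, 1920, 1080)) {
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(OH_AVCapability_GetName(cap));
    // ... configure and use the decoder ...
}
```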
# Using OHAudio for Audio Playback
**OHAudio** is a set of native APIs introduced in API version 10. These APIs are normalized in design and support both common and low-latency audio channels.
## Prerequisites
To use the playback or recording capability of **OHAudio**, you must first import the corresponding header files.
Specifically, to use APIs for audio playback, import <[native_audiostreambuilder.h](../reference/native-apis/native__audiostreambuilder_8h.md)> and <[native_audiorenderer.h](../reference/native-apis/native__audiorenderer_8h.md)>.
## Audio Stream Builder
**OHAudio** provides the **OH_AudioStreamBuilder** class, which complies with the builder design pattern and is used to build audio streams. You need to specify [OH_AudioStream_Type](../reference/native-apis/_o_h_audio.md#oh_audiostream_type) based on your service scenarios.
**OH_AudioStream_Type** can be set to either of the following:
- AUDIOSTREAM_TYPE_RENDERER
- AUDIOSTREAM_TYPE_CAPTURER
The following code snippet shows how to use [OH_AudioStreamBuilder_Create](../reference/native-apis/_o_h_audio.md#oh_audiostreambuilder_create) to create a builder:
```
OH_AudioStreamBuilder* builder;
OH_AudioStreamBuilder_Create(&builder, streamType);
```
After the audio service is complete, call [OH_AudioStreamBuilder_Destroy](../reference/native-apis/_o_h_audio.md#oh_audiostreambuilder_destroy) to destroy the builder.
```
OH_AudioStreamBuilder_Destroy(builder);
```
## How to Develop
Read [OHAudio](../reference/native-apis/_o_h_audio.md) for the API reference.
The following walks you through how to implement simple playback:
1. Create an audio stream builder.
```c++
OH_AudioStreamBuilder* builder;
OH_AudioStreamBuilder_Create(&builder, AUDIOSTREAM_TYPE_RENDERER);
```
2. Set audio stream parameters.
After creating the builder for audio playback, set the parameters required.
```c++
OH_AudioStreamBuilder_SetSamplingRate(builder, rate);
OH_AudioStreamBuilder_SetChannelCount(builder, channelCount);
OH_AudioStreamBuilder_SetSampleFormat(builder, format);
OH_AudioStreamBuilder_SetEncodingType(builder, encodingType);
OH_AudioStreamBuilder_SetRendererInfo(builder, usage);
```
Note that the audio data to play is written through callbacks. You must call **OH_AudioStreamBuilder_SetRendererCallback** to implement the callbacks. For details about the declaration of the callback functions, see [OH_AudioRenderer_Callbacks](../reference/native-apis/_o_h_audio.md#oh_audiorenderer_callbacks).
3. Set the callback functions.
```c++
OH_AudioStreamBuilder_SetRendererCallback(builder, callbacks, nullptr);
```
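The `callbacks` argument above must be populated before it is passed in. The snippet below is a minimal sketch, assuming that filling the playback buffer through the `OH_AudioRenderer_OnWriteData` member is enough for a simple scenario; `MyOnWriteData` and the silence-filling logic are illustrative placeholders, and a real application copies its PCM data into the buffer instead.
```c++
// Illustrative data callback: fill "buffer" with "length" bytes of PCM data to play.
static int32_t MyOnWriteData(OH_AudioRenderer* renderer, void* userData, void* buffer, int32_t length)
{
    (void)renderer;
    (void)userData;
    uint8_t* dst = static_cast<uint8_t*>(buffer);
    for (int32_t i = 0; i < length; i++) {
        dst[i] = 0; // Placeholder: writes silence; replace with real PCM data.
    }
    return 0;
}

OH_AudioRenderer_Callbacks callbacks = {}; // Zero-initialize; set only the callbacks you implement (assumption for this sketch).
callbacks.OH_AudioRenderer_OnWriteData = MyOnWriteData;
OH_AudioStreamBuilder_SetRendererCallback(builder, callbacks, nullptr);
```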
4. Create an audio renderer instance.
```c++
OH_AudioRenderer* audioRenderer;
OH_AudioStreamBuilder_GenerateRenderer(builder, &audioRenderer);
```
5. Use the audio renderer.
You can use the APIs listed below to control the audio streams.
| API | Description |
| ------------------------------------------------------------ | ------------ |
| OH_AudioStream_Result OH_AudioRenderer_Start(OH_AudioRenderer* renderer) | Starts the audio renderer. |
| OH_AudioStream_Result OH_AudioRenderer_Pause(OH_AudioRenderer* renderer) | Pauses the audio renderer. |
| OH_AudioStream_Result OH_AudioRenderer_Stop(OH_AudioRenderer* renderer) | Stops the audio renderer. |
| OH_AudioStream_Result OH_AudioRenderer_Flush(OH_AudioRenderer* renderer) | Flushes written audio data.|
| OH_AudioStream_Result OH_AudioRenderer_Release(OH_AudioRenderer* renderer) | Releases the audio renderer instance.|
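For example, a minimal control flow for the renderer created above (a sketch; it assumes **AUDIOSTREAM_SUCCESS** indicates success and omits detailed error handling) might look like this:
```c++
// Start rendering; PCM data is then requested through the data callback.
OH_AudioStream_Result result = OH_AudioRenderer_Start(audioRenderer);
if (result != AUDIOSTREAM_SUCCESS) {
    // Exception handling.
}
// ... playback in progress ...
// Stop the stream and release the instance when playback is complete.
OH_AudioRenderer_Stop(audioRenderer);
OH_AudioRenderer_Release(audioRenderer);
```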
6. Destroy the audio stream builder.
When the builder is no longer used, release related resources.
```c++
OH_AudioStreamBuilder_Destroy(builder);
```
# Using OHAudio for Audio Recording
**OHAudio** is a set of native APIs introduced in API version 10. These APIs are normalized in design and support both common and low-latency audio channels.
## Prerequisites
To use the playback or recording capability of **OHAudio**, you must first import the corresponding header files.
To use APIs for audio recording, import <[native_audiostreambuilder.h](../reference/native-apis/native__audiostreambuilder_8h.md)> and <[native_audiocapturer.h](../reference/native-apis/native__audiocapturer_8h.md)>.
## Audio Stream Builder
**OHAudio** provides the **OH_AudioStreamBuilder** class, which complies with the builder design pattern and is used to build audio streams. You need to specify [OH_AudioStream_Type](../reference/native-apis/_o_h_audio.md#oh_audiostream_type) based on your service scenarios.
**OH_AudioStream_Type** can be set to either of the following:
- AUDIOSTREAM_TYPE_RENDERER
- AUDIOSTREAM_TYPE_CAPTURER
The following code snippet shows how to use [OH_AudioStreamBuilder_Create](../reference/native-apis/_o_h_audio.md#oh_audiostreambuilder_create) to create a builder:
```
OH_AudioStreamBuilder* builder;
OH_AudioStreamBuilder_Create(&builder, streamType);
```
After the audio service is complete, call [OH_AudioStreamBuilder_Destroy](../reference/native-apis/_o_h_audio.md#oh_audiostreambuilder_destroy) to destroy the builder.
```
OH_AudioStreamBuilder_Destroy(builder);
```
## How to Develop
Read [OHAudio](../reference/native-apis/_o_h_audio.md) for the API reference.
The following walks you through how to implement simple recording:
1. Create an audio stream builder.
```c++
OH_AudioStreamBuilder* builder;
OH_AudioStreamBuilder_Create(&builder, AUDIOSTREAM_TYPE_CAPTURER);
```
2. Set audio stream parameters.
After creating the builder for audio recording, set the parameters required.
```c++
OH_AudioStreamBuilder_SetSamplingRate(builder, rate);
OH_AudioStreamBuilder_SetChannelCount(builder, channelCount);
OH_AudioStreamBuilder_SetSampleFormat(builder, format);
OH_AudioStreamBuilder_SetEncodingType(builder, encodingType);
OH_AudioStreamBuilder_SetCapturerInfo(builder, sourceType);
```
Note that the audio data to record is written through callbacks. You must call **OH_AudioStreamBuilder_SetCapturerCallback** to implement the callbacks. For details about the declaration of the callback functions, see [OH_AudioCapturer_Callbacks](../reference/native-apis/_o_h_audio.md#oh_audiocapturer_callbacks).
3. Set the callback functions.
```c++
OH_AudioStreamBuilder_SetCapturerCallback(builder, callbacks, nullptr);
```
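The `callbacks` argument above must be populated before it is passed in. Below is a minimal sketch, assuming that consuming the captured data through the `OH_AudioCapturer_OnReadData` member is enough for a simple scenario; `MyOnReadData` and its placeholder body are illustrative only.
```c++
// Illustrative data callback: "buffer" holds "length" bytes of captured PCM data to consume.
static int32_t MyOnReadData(OH_AudioCapturer* capturer, void* userData, void* buffer, int32_t length)
{
    (void)capturer;
    (void)userData;
    // Placeholder: a real application writes the captured data to a file or processing pipeline here.
    (void)buffer;
    (void)length;
    return 0;
}

OH_AudioCapturer_Callbacks callbacks = {}; // Zero-initialize; set only the callbacks you implement (assumption for this sketch).
callbacks.OH_AudioCapturer_OnReadData = MyOnReadData;
OH_AudioStreamBuilder_SetCapturerCallback(builder, callbacks, nullptr);
```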
4. Create an audio capturer instance.
```c++
OH_AudioCapturer* audioCapturer;
OH_AudioStreamBuilder_GenerateCapturer(builder, &audioCapturer);
```
5. Use the audio capturer.
You can use the APIs listed below to control the audio streams.
| API | Description |
| ------------------------------------------------------------ | ------------ |
| OH_AudioStream_Result OH_AudioCapturer_Start(OH_AudioCapturer* capturer) | Starts the audio capturer. |
| OH_AudioStream_Result OH_AudioCapturer_Pause(OH_AudioCapturer* capturer) | Pauses the audio capturer. |
| OH_AudioStream_Result OH_AudioCapturer_Stop(OH_AudioCapturer* capturer) | Stops the audio capturer. |
| OH_AudioStream_Result OH_AudioCapturer_Flush(OH_AudioCapturer* capturer) | Flushes obtained audio data.|
| OH_AudioStream_Result OH_AudioCapturer_Release(OH_AudioCapturer* capturer) | Releases the audio capturer instance.|
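For example, a minimal control flow for the capturer created above (a sketch; it assumes **AUDIOSTREAM_SUCCESS** indicates success and omits detailed error handling) might look like this:
```c++
// Start capturing; recorded PCM data is then delivered through the data callback.
OH_AudioStream_Result result = OH_AudioCapturer_Start(audioCapturer);
if (result != AUDIOSTREAM_SUCCESS) {
    // Exception handling.
}
// ... recording in progress ...
// Stop the stream and release the instance when recording is complete.
OH_AudioCapturer_Stop(audioCapturer);
OH_AudioCapturer_Release(audioCapturer);
```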
6. Destroy the audio stream builder.
When the builder is no longer used, release related resources.
```c++
OH_AudioStreamBuilder_Destroy(builder);
```
# Video Decoding
You can call the native APIs provided by the **VideoDecoder** module to decode video, that is, to decode media data into a YUV file or send it for display.
Currently, the following decoding capabilities are supported:
| Container Specification| Video Hardware Decoding Type | Video Software Decoding Type |
| -------- | --------------------- | ---------------- |
| mp4 | AVC (H.264), HEVC (H.265)|AVC (H.264) |
Video software decoding and hardware decoding differ in supported formats. When a decoder is created by MIME type, software decoding supports only H.264 (video/avc), whereas hardware decoding supports both H.264 (video/avc) and H.265 (video/hevc).
## How to Develop
Read [VideoDecoder](../reference/native-apis/_video_decoder.md) for the API reference.
1. Create a decoder instance.
You can create a decoder by name or MIME type.
``` c++
// Create a decoder by name.
OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
const char *name = OH_AVCapability_GetName(capability);
OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name); // name:"OH.Media.Codec.Decoder.Video.AVC"
```
```c++
// Create a decoder by MIME type.
// Create an H.264 decoder for software/hardware decoding.
OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
// Create a decoder by MIME type.
// Create an H.265 decoder for hardware decoding.
OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC);
```
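Because H.265 decoding is available only through the hardware decoder, you may want to check the reported capability before creating the decoder by MIME type. The following is a minimal sketch using **OH_AVCodec_GetCapability** (shown above); it assumes a null return means the device does not report the capability.
```c++
// Sketch: check whether H.265 (HEVC) decoding is reported before creating the decoder.
OH_AVCapability *hevcCap = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_HEVC, false);
if (hevcCap == nullptr) {
    // H.265 decoding is not available on this device; fall back to the H.264 decoder.
}
```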
``` c++
// Initialize the queues.
class VDecSignal {
public:
std::mutex inMutex_;
std::mutex outMutex_;
std::condition_variable inCond_;
std::condition_variable outCond_;
std::queue<uint32_t> inQueue_;
std::queue<uint32_t> outQueue_;
std::queue<OH_AVMemory *> inBufferQueue_;
std::queue<OH_AVMemory *> outBufferQueue_;
std::queue<OH_AVCodecBufferAttr> attrQueue_;
};
VDecSignal *signal_;
```
2. Call **OH_VideoDecoder_SetCallback()** to set callback functions.
Register the **OH_AVCodecAsyncCallback** struct that defines the following callback function pointers:
- **OnError**, a callback used to report a codec operation error
- **OnOutputFormatChanged**, a callback used to report a codec stream change, for example, a change in the stream width or height
- **OnInputBufferAvailable**, a callback used to report that input data is required, which means that the decoder is ready to receive data
- **OnOutputBufferAvailable**, a callback used to report that output data has been generated, which means that decoding is complete (Note: The **data** parameter in the callback function is empty in surface output mode.)
You need to process the callback functions to ensure that the decoder runs properly.
``` c++
// Set the OnError callback function.
static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
{
(void)codec;
(void)errorCode;
(void)userData;
}
// Set the OnOutputFormatChanged callback function.
static void OnOutputFormatChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
{
(void)codec;
(void)format;
(void)userData;
}
// Set the OnInputBufferAvailable callback function, which is used to obtain the input frame information.
static void OnInputBufferAvailable(OH_AVCodec *codec, uint32_t index, OH_AVMemory *data, void *userData)
{
(void)codec;
VDecSignal *signal_ = static_cast<VDecSignal *>(userData);
unique_lock<mutex> lock(signal_->inMutex_);
// The ID of the input frame is sent to inQueue_.
signal_->inQueue_.push(index);
// The input frame data is sent to inBufferQueue_.
signal_->inBufferQueue_.push(data);
signal_->inCond_.notify_all();
}
// Set the OnOutputBufferAvailable callback function, which is used to obtain the output frame information.
static void OnOutputBufferAvailable(OH_AVCodec *codec, uint32_t index, OH_AVMemory *data, OH_AVCodecBufferAttr *attr,
void *userData)
{
(void)codec;
VDecSignal *signal_ = static_cast<VDecSignal *>(userData);
unique_lock<mutex> lock(signal_->outMutex_);
// The index of the output buffer is sent to outQueue_.
signal_->outQueue_.push(index);
// The decoded data (specified by data) is sent to outBufferQueue_. (Note: data is empty in surface output mode.)
signal_->outBufferQueue_.push(data);
signal_->attrQueue_.push(*attr);
signal_->outCond_.notify_all();
}
OH_AVCodecAsyncCallback cb = {&OnError, &OnOutputFormatChanged, &OnInputBufferAvailable, &OnOutputBufferAvailable};
// Set the asynchronous callbacks.
int32_t ret = OH_VideoDecoder_SetCallback(videoDec, cb, signal_);
```
3. Call **OH_VideoDecoder_Configure()** to configure the decoder.
The following options are mandatory: video frame width, video frame height, and video color format.
``` c++
// (Mandatory) Configure the video frame width.
constexpr uint32_t DEFAULT_WIDTH = 320;
// (Mandatory) Configure the video frame height.
constexpr uint32_t DEFAULT_HEIGHT = 240;
OH_AVFormat *format = OH_AVFormat_Create();
// Set the format.
OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, DEFAULT_WIDTH);
OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, DEFAULT_HEIGHT);
OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, AV_PIXEL_FORMAT_NV21);
// Configure the decoder.
int32_t ret = OH_VideoDecoder_Configure(videoDec, format);
OH_AVFormat_Destroy(format);
```
4. (Optional) Set the surface.
This step is required only when the surface is used to send the data for display.
``` c++
// Set the parameters of the display window.
sptr<Rosen::Window> window = nullptr;
sptr<Rosen::WindowOption> option = new Rosen::WindowOption();
option->SetWindowRect({0, 0, DEFAULT_WIDTH, DEFAULT_HEIGHT});
option->SetWindowType(Rosen::WindowType::WINDOW_TYPE_APP_LAUNCHING);
option->SetWindowMode(Rosen::WindowMode::WINDOW_MODE_FLOATING);
window = Rosen::Window::Create("video-decoding", option);
window->Show();
sptr<Surface> ps = window->GetSurfaceNode()->GetSurface();
OHNativeWindow *nativeWindow = CreateNativeWindowFromSurface(&ps);
int32_t ret = OH_VideoDecoder_SetSurface(videoDec, nativeWindow); // Pass the native window created from the surface.
bool isSurfaceMode = true;
```
5. (Optional) Configure the surface parameters of the decoder. This step is required only when the surface is used.
``` c++
OH_AVFormat *format = OH_AVFormat_Create();
// Configure the display rotation angle.
OH_AVFormat_SetIntValue(format, OH_MD_KEY_ROTATION, 90);
// Configure the matching mode (scaling or cropping) between the video and the display screen.
OH_AVFormat_SetIntValue(format, OH_MD_KEY_SCALING_MODE, SCALING_MODE_SCALE_CROP);
int32_t ret = OH_VideoDecoder_SetParameter(videoDec, format);
OH_AVFormat_Destroy(format);
```
6. Call **OH_VideoDecoder_Start()** to start the decoder.
``` c++
// Paths of the input bit stream file and the output YUV file. (The input path and extension below are illustrative placeholders.)
string_view inputFilePath = "/*yourpath*.h264";
string_view outputFilePath = "/*yourpath*.yuv";
std::unique_ptr<std::ifstream> inputFile = std::make_unique<std::ifstream>();
// Open the path of the binary file to be decoded.
inputFile->open(inputFilePath.data(), std::ios::in | std::ios::binary);
// Configure the parameter in buffer mode.
if(!isSurfaceMode) {
// Configure the output file path in buffer mode.
std::unique_ptr<std::ofstream> outFile = std::make_unique<std::ofstream>();
outFile->open(outputFilePath.data(), std::ios::out | std::ios::binary);
}
// Start decoding.
int32_t ret = OH_VideoDecoder_Start(videoDec);
```
7. Call **OH_VideoDecoder_PushInputData()** to push the stream to the input queue for decoding.
``` c++
// Configure the buffer information.
OH_AVCodecBufferAttr info;
// Call av_packet_alloc to initialize and return a container packet.
AVPacket *pkt = av_packet_alloc();
// Configure the input size, offset, and timestamp of the buffer.
info.size = pkt->size;
info.offset = 0;
info.pts = pkt->pts;
info.flags = AVCODEC_BUFFER_FLAGS_NONE;
// Send the data to the input queue for decoding. The index is the subscript of the queue.
int32_t ret = OH_VideoDecoder_PushInputData(videoDec, index, info);
```
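The call above assumes that `index` and the matching input buffer were taken from the queues filled by **OnInputBufferAvailable**, and that one compressed frame has already been copied into that buffer. A minimal sketch of that step, using the `VDecSignal` queues defined earlier (demuxing details omitted):
```c++
// Sketch: obtain an input buffer index from OnInputBufferAvailable and fill it with one frame.
unique_lock<mutex> lock(signal_->inMutex_);
signal_->inCond_.wait(lock, []() { return !signal_->inQueue_.empty(); });
uint32_t index = signal_->inQueue_.front();
OH_AVMemory *buffer = signal_->inBufferQueue_.front();
signal_->inQueue_.pop();
signal_->inBufferQueue_.pop();
lock.unlock();
// Copy the compressed frame (for example, pkt->data from the demuxer) into the input buffer.
memcpy(OH_AVMemory_GetAddr(buffer), pkt->data, pkt->size);
```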
8. Call **OH_VideoDecoder_FreeOutputData()** to output the decoded frames.
``` c++
int32_t ret;
// Write the decoded data (specified by data) to the output file.
outFile->write(reinterpret_cast<char *>(OH_AVMemory_GetAddr(data)), attr.size); // attr is the OH_AVCodecBufferAttr of this frame.
// Free the buffer that stores the output data. The index is the subscript of the surface/buffer queue.
if (isSurfaceMode) {
ret = OH_VideoDecoder_RenderOutputData(videoDec, index);
} else {
ret = OH_VideoDecoder_FreeOutputData(videoDec, index);
}
if (ret != AV_ERR_OK) {
// Exception handling.
}
```
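Similarly, `index`, `data`, and `attr` above are assumed to come from the queues filled by **OnOutputBufferAvailable**. A minimal sketch of retrieving them:
```c++
// Sketch: obtain one decoded frame's index, buffer, and attributes from the output queues.
unique_lock<mutex> lock(signal_->outMutex_);
signal_->outCond_.wait(lock, []() { return !signal_->outQueue_.empty(); });
uint32_t index = signal_->outQueue_.front();
OH_AVMemory *data = signal_->outBufferQueue_.front();   // Empty in surface output mode.
OH_AVCodecBufferAttr attr = signal_->attrQueue_.front();
signal_->outQueue_.pop();
signal_->outBufferQueue_.pop();
signal_->attrQueue_.pop();
lock.unlock();
```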
9. (Optional) Call **OH_VideoDecoder_Flush()** to refresh the decoder.
After **OH_VideoDecoder_Flush()** is called, the decoder remains in the running state, but the current queue is cleared and the buffer storing the decoded data is freed.
To continue decoding, you must call **OH_VideoDecoder_Start()** again.
``` c++
int32_t ret;
// Refresh the decoder.
ret = OH_VideoDecoder_Flush(videoDec);
if (ret != AV_ERR_OK) {
// Exception handling.
}
// Start decoding again.
ret = OH_VideoDecoder_Start(videoDec);
```
10. (Optional) Call **OH_VideoDecoder_Reset()** to reset the decoder.
After **OH_VideoDecoder_Reset()** is called, the decoder returns to the initialized state. To continue decoding, you must call **OH_VideoDecoder_Configure()** and then **OH_VideoDecoder_Start()**.
``` c++
int32_t ret;
// Reset the decoder.
ret = OH_VideoDecoder_Reset(videoDec);
if (ret != AV_ERR_OK) {
// Exception handling.
}
// Reconfigure the decoder.
ret = OH_VideoDecoder_Configure(videoDec, format);
```
11. Call **OH_VideoDecoder_Stop()** to stop the decoder.
``` c++
int32_t ret;
// Stop the decoder.
ret = OH_VideoDecoder_Stop(videoDec);
if (ret != AV_ERR_OK) {
// Exception handling.
}
return AV_ERR_OK;
```
12. Call **OH_VideoDecoder_Destroy()** to destroy the decoder instance and release resources.
``` c++
int32_t ret;
// Call OH_VideoDecoder_Destroy to destroy the decoder.
ret = OH_VideoDecoder_Destroy(videoDec);
if (ret != AV_ERR_OK) {
// Exception handling.
}
return AV_ERR_OK;
```