diff --git a/en/application-dev/media/Readme-EN.md b/en/application-dev/media/Readme-EN.md index f6902595cadbea27765ebf1812544821b3c68a09..f3a233ca129527db112459ab5110df49b8e1052d 100755 --- a/en/application-dev/media/Readme-EN.md +++ b/en/application-dev/media/Readme-EN.md @@ -1,29 +1,60 @@ # Media +- [Media Application Overview](media-application-overview.md) - Audio and Video - - [Audio Overview](audio-overview.md) - - [Audio Rendering Development](audio-renderer.md) - - [Audio Stream Management Development](audio-stream-manager.md) - - [Audio Capture Development](audio-capturer.md) - - [OpenSL ES Audio Playback Development](opensles-playback.md) - - [OpenSL ES Audio Recording Development](opensles-capture.md) - - [Audio Interruption Mode Development](audio-interruptmode.md) - - [Volume Management Development](audio-volume-manager.md) - - [Audio Routing and Device Management Development](audio-routing-manager.md) - - [AVPlayer Development (Recommended)](avplayer-playback.md) - - [AVRecorder Development (Recommended)](avrecorder.md) - - [Audio Playback Development (To Be Deprecated Soon)](audio-playback.md) - - [Audio Recording Development (To Be Deprecated Soon)](audio-recorder.md) - - [Video Playback Development (To Be Deprecated Soon)](video-playback.md) - - [Video Recording Development (To Be Deprecated Soon)](video-recorder.md) - -- AVSession + - [Audio and Video Overview](av-overview.md) + - [AVPlayer and AVRecorder](avplayer-avrecorder-overview.md) + - Audio Playback + - [Audio Playback Overview](audio-playback-overview.md) + - [Using AVPlayer for Audio Playback](using-avplayer-for-playback.md) + - [Using AudioRenderer for Audio Playback](using-audiorenderer-for-playback.md) + - [Using OpenSL ES for Audio Playback](using-opensl-es-for-playback.md) + - [Using TonePlayer for Audio Playback (for System Applications Only)](using-toneplayer-for-playback.md) + - [Audio Playback Concurrency Policy](audio-playback-concurrency.md) + - [Volume Management](volume-management.md) + - [Audio Playback Stream Management](audio-playback-stream-management.md) + - [Audio Output Device Management](audio-output-device-management.md) + - [Distributed Audio Playback (for System Applications Only)](distributed-audio-playback.md) + - Audio Recording + - [Audio Recording Overview](audio-recording-overview.md) + - [Using AVRecorder for Audio Recording](using-avrecorder-for-recording.md) + - [Using AudioCapturer for Audio Recording](using-audiocapturer-for-recording.md) + - [Using OpenSL ES for Audio Recording](using-opensl-es-for-recording.md) + - [Microphone Management](mic-management.md) + - [Audio Recording Stream Management](audio-recording-stream-management.md) + - [Audio Input Device Management](audio-input-device-management.md) + - Audio Call + - [Audio Call Overview](audio-call-overview.md) + - [Developing Audio Call](audio-call-development.md) + - [Video Playback](video-playback.md) + - [Video Recording](video-recording.md) +- AVSession (for System Applications Only) - [AVSession Overview](avsession-overview.md) - - [AVSession Development](avsession-guidelines.md) - + - Local AVSession + - [Local AVSession Overview](local-avsession-overview.md) + - [AVSession Provider](using-avsession-developer.md) + - [AVSession Controller](using-avsession-controller.md) + - Distributed AVSession + - [Distributed AVSession Overview](distributed-avsession-overview.md) + - [Using Distributed AVSession](using-distributed-avsession.md) +- Camera (for System Applications Only) + - [Camera Overview](camera-overview.md) 
+  - Camera Development
+    - [Camera Development Preparations](camera-preparation.md)
+    - [Device Input Management](camera-device-input.md)
+    - [Session Management](camera-session-management.md)
+    - [Camera Preview](camera-preview.md)
+    - [Camera Photographing](camera-shooting.md)
+    - [Video Recording](camera-recording.md)
+    - [Camera Metadata](camera-metadata.md)
+  - Best Practices
+    - [Camera Photographing Sample](camera-shooting-case.md)
+    - [Video Recording Sample](camera-recording-case.md)
 - Image
-  - [Image Development](image.md)
-
-- Camera
-  - [Camera Development](camera.md)
-  - [Distributed Camera Development](remote-camera.md)
+  - [Image Overview](image-overview.md)
+  - [Image Decoding](image-decoding.md)
+  - Image Processing
+    - [Image Transformation](image-transformation.md)
+    - [Pixel Map Operation](image-pixelmap-operation.md)
+  - [Image Encoding](image-encoding.md)
+  - [Image Tool](image-tool.md)
diff --git a/en/application-dev/media/audio-call-development.md b/en/application-dev/media/audio-call-development.md
new file mode 100644
index 0000000000000000000000000000000000000000..8234c837c2ce985c2a1a7dc91c7e0002fb3d4a69
--- /dev/null
+++ b/en/application-dev/media/audio-call-development.md
@@ -0,0 +1,259 @@
+# Developing Audio Call
+
+During an audio call, audio output (playing the peer voice) and audio input (recording the local voice) are carried out simultaneously. You can use the AudioRenderer to implement audio output and the AudioCapturer to implement audio input.
+
+Before starting or stopping the audio call service, the application needs to check the [audio scene](audio-call-overview.md#audio-scene) and [ringer mode](audio-call-overview.md#ringer-mode) so that it can adopt proper audio management and prompt policies.
+
+The sample code below demonstrates the basic process of using the AudioRenderer and AudioCapturer to implement the audio call service. It does not cover the transmission of call data. In actual development, the peer call data received over the network needs to be decoded and played; the sample code simulates this by reading an audio file. Likewise, the local call data needs to be encoded and packed before being sent to the peer over the network; the sample code simulates this by writing an audio file.
+
+## Using AudioRenderer to Play the Peer Voice
+
+This process is similar to the process of [using AudioRenderer to develop audio playback](using-audiorenderer-for-playback.md). The key differences lie in the **audioRendererInfo** parameter and the audio data source. In the **audioRendererInfo** parameter used for audio calling, **content** must be set to **CONTENT_TYPE_SPEECH**, and **usage** must be set to **STREAM_USAGE_VOICE_COMMUNICATION**.
+
+```ts
+import audio from '@ohos.multimedia.audio';
+import fs from '@ohos.file.fs';
+const TAG = 'VoiceCallDemoForAudioRenderer';
+// The process is similar to the process of using AudioRenderer to develop audio playback. The key differences lie in the audioRendererInfo parameter and audio data source.
+export default class VoiceCallDemoForAudioRenderer {
+  private renderModel = undefined;
+  private audioStreamInfo = {
+    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000, // Sampling rate.
+    channels: audio.AudioChannel.CHANNEL_2, // Channel.
+    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sampling format.
+    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding format.
+  }
+  private audioRendererInfo = {
+    // Parameters corresponding to the call scenario need to be used.
+ content: audio.ContentType.CONTENT_TYPE_SPEECH, // Audio content type: speech. + usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION, // Audio stream usage type: voice communication. + rendererFlags: 0 // AudioRenderer flag. The default value is 0. + } + private audioRendererOptions = { + streamInfo: this.audioStreamInfo, + rendererInfo: this.audioRendererInfo + } + // Create an AudioRenderer instance, and set the events to listen for. + init() { + audio.createAudioRenderer(this.audioRendererOptions, (err, renderer) => { // Create an AudioRenderer instance. + if (!err) { + console.info(`${TAG}: creating AudioRenderer success`); + this.renderModel = renderer; + this.renderModel.on('stateChange', (state) => { // Set the events to listen for. A callback is invoked when the AudioRenderer is switched to the specified state. + if (state == 1) { + console.info('audio renderer state is: STATE_PREPARED'); + } + if (state == 2) { + console.info('audio renderer state is: STATE_RUNNING'); + } + }); + this.renderModel.on('markReach', 1000, (position) => { // Subscribe to the markReach event. A callback is triggered when the number of rendered frames reaches 1000. + if (position == 1000) { + console.info('ON Triggered successfully'); + } + }); + } else { + console.info(`${TAG}: creating AudioRenderer failed, error: ${err.message}`); + } + }); + } + // Start audio rendering. + async start() { + let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED]; + if (stateGroup.indexOf(this.renderModel.state) === -1) { // Rendering can be started only when the AudioRenderer is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state. + console.error(TAG + 'start failed'); + return; + } + await this.renderModel.start(); // Start rendering. + const bufferSize = await this.renderModel.getBufferSize(); + // The process of reading audio file data is used as an example. In actual audio call development, audio data transmitted from the peer needs to be read. + let context = getContext(this); + let path = context.filesDir; + + const filePath = path + '/voice_call_data.wav'; // Sandbox path. The actual path is /data/storage/el2/base/haps/entry/files/voice_call_data.wav. + let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY); + let stat = await fs.stat(filePath); + let buf = new ArrayBuffer(bufferSize); + let len = stat.size % bufferSize === 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1); + for (let i = 0; i < len; i++) { + let options = { + offset: i * bufferSize, + length: bufferSize + }; + let readsize = await fs.read(file.fd, buf, options); + // buf indicates the audio data to be written to the buffer. Before calling AudioRenderer.write(), you can preprocess the audio data for personalized playback. The AudioRenderer reads the audio data written to the buffer for rendering. + let writeSize = await new Promise((resolve, reject) => { + this.renderModel.write(buf, (err, writeSize) => { + if (err) { + reject(err); + } else { + resolve(writeSize); + } + }); + }); + if (this.renderModel.state === audio.AudioState.STATE_RELEASED) { // The rendering stops if the AudioRenderer is in the STATE_RELEASED state. + fs.close(file); + await this.renderModel.stop(); + } + if (this.renderModel.state === audio.AudioState.STATE_RUNNING) { + if (i === len - 1) { // The rendering stops if the file finishes reading. + fs.close(file); + await this.renderModel.stop(); + } + } + } + } + // Pause the rendering. 
+ async pause() { + // Rendering can be paused only when the AudioRenderer is in the STATE_RUNNING state. + if (this.renderModel.state !== audio.AudioState.STATE_RUNNING) { + console.info('Renderer is not running'); + return; + } + await this.renderModel.pause(); // Pause rendering. + if (this.renderModel.state === audio.AudioState.STATE_PAUSED) { + console.info('Renderer is paused.'); + } else { + console.error('Pausing renderer failed.'); + } + } + // Stop rendering. + async stop() { + // Rendering can be stopped only when the AudioRenderer is in the STATE_RUNNING or STATE_PAUSED state. + if (this.renderModel.state !== audio.AudioState.STATE_RUNNING && this.renderModel.state !== audio.AudioState.STATE_PAUSED) { + console.info('Renderer is not running or paused.'); + return; + } + await this.renderModel.stop(); // Stop rendering. + if (this.renderModel.state === audio.AudioState.STATE_STOPPED) { + console.info('Renderer stopped.'); + } else { + console.error('Stopping renderer failed.'); + } + } + // Release the instance. + async release() { + // The AudioRenderer can be released only when it is not in the STATE_RELEASED state. + if (this.renderModel.state === audio.AudioState.STATE_RELEASED) { + console.info('Renderer already released'); + return; + } + await this.renderModel.release(); // Release the instance. + if (this.renderModel.state === audio.AudioState.STATE_RELEASED) { + console.info('Renderer released'); + } else { + console.error('Renderer release failed.'); + } + } +} +``` + +## Using AudioCapturer to Record the Local Voice + +This process is similar to the process of [using AudioCapturer to develop audio recording](using-audiocapturer-for-recording.md). The key differences lie in the **audioCapturerInfo** parameter and audio data stream direction. In the **audioCapturerInfo** parameter used for audio calling, **source** must be set to **SOURCE_TYPE_VOICE_COMMUNICATION**. + +```ts +import audio from '@ohos.multimedia.audio'; +import fs from '@ohos.file.fs'; +const TAG = 'VoiceCallDemoForAudioCapturer'; +// The process is similar to the process of using AudioCapturer to develop audio recording. The key differences lie in the audioCapturerInfo parameter and audio data stream direction. +export default class VoiceCallDemoForAudioCapturer { + private audioCapturer = undefined; + private audioStreamInfo = { + samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, // Sampling rate. + channels: audio.AudioChannel.CHANNEL_1, // Channel. + sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sampling format. + encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding format. + } + private audioCapturerInfo = { + // Parameters corresponding to the call scenario need to be used. + source: audio.SourceType.SOURCE_TYPE_VOICE_COMMUNICATION, // Audio source type: voice communication. + capturerFlags: 0 // AudioCapturer flag. The default value is 0. + } + private audioCapturerOptions = { + streamInfo: this.audioStreamInfo, + capturerInfo: this.audioCapturerInfo + } + // Create an AudioCapturer instance, and set the events to listen for. + init() { + audio.createAudioCapturer(this.audioCapturerOptions, (err, capturer) => { // Create an AudioCapturer instance. + if (err) { + console.error(`Invoke createAudioCapturer failed, code is ${err.code}, message is ${err.message}`); + return; + } + console.info(`${TAG}: create AudioCapturer success`); + this.audioCapturer = capturer; + this.audioCapturer.on('markReach', 1000, (position) => { // Subscribe to the markReach event. 
A callback is triggered when the number of captured frames reaches 1000.
+        if (position === 1000) {
+          console.info('ON Triggered successfully');
+        }
+      });
+      this.audioCapturer.on('periodReach', 2000, (position) => { // Subscribe to the periodReach event. A callback is triggered when the number of captured frames reaches 2000.
+        if (position === 2000) {
+          console.info('ON Triggered successfully');
+        }
+      });
+    });
+  }
+  // Start audio recording.
+  async start() {
+    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
+    if (stateGroup.indexOf(this.audioCapturer.state) === -1) { // Recording can be started only when the AudioCapturer is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
+      console.error(`${TAG}: start failed`);
+      return;
+    }
+    await this.audioCapturer.start(); // Start recording.
+    // The following describes how to write audio data to a file. In actual audio call development, the local audio data needs to be encoded and packed, and then sent to the peer through the network.
+    let context = getContext(this);
+    const path = context.filesDir + '/voice_call_data.wav'; // Path for storing the recorded audio file.
+    let file = fs.openSync(path, 0o2 | 0o100); // Create the file if it does not exist.
+    let fd = file.fd;
+    let numBuffersToCapture = 150; // Write data for 150 times.
+    let count = 0;
+    while (numBuffersToCapture) {
+      let bufferSize = await this.audioCapturer.getBufferSize();
+      let buffer = await this.audioCapturer.read(bufferSize, true);
+      let options = {
+        offset: count * bufferSize,
+        length: bufferSize
+      };
+      if (buffer === undefined) {
+        console.error(`${TAG}: read buffer failed`);
+      } else {
+        let number = fs.writeSync(fd, buffer, options);
+        console.info(`${TAG}: data written: ${number}`);
+      }
+      numBuffersToCapture--;
+      count++;
+    }
+  }
+  // Stop recording.
+  async stop() {
+    // The AudioCapturer can be stopped only when it is in the STATE_RUNNING or STATE_PAUSED state.
+    if (this.audioCapturer.state !== audio.AudioState.STATE_RUNNING && this.audioCapturer.state !== audio.AudioState.STATE_PAUSED) {
+      console.info('Capturer is not running or paused');
+      return;
+    }
+    await this.audioCapturer.stop(); // Stop recording.
+    if (this.audioCapturer.state === audio.AudioState.STATE_STOPPED) {
+      console.info('Capturer stopped');
+    } else {
+      console.error('Capturer stop failed');
+    }
+  }
+  // Release the instance.
+  async release() {
+    // The AudioCapturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
+    if (this.audioCapturer.state === audio.AudioState.STATE_RELEASED || this.audioCapturer.state === audio.AudioState.STATE_NEW) {
+      console.info('Capturer already released');
+      return;
+    }
+    await this.audioCapturer.release(); // Release the instance.
+    if (this.audioCapturer.state === audio.AudioState.STATE_RELEASED) {
+      console.info('Capturer released');
+    } else {
+      console.error('Capturer release failed');
+    }
+  }
+}
+```
diff --git a/en/application-dev/media/audio-call-overview.md b/en/application-dev/media/audio-call-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..1462198c201203da3eecc902de556c005ad3aae9
--- /dev/null
+++ b/en/application-dev/media/audio-call-overview.md
@@ -0,0 +1,49 @@
+# Audio Call Overview
+
+Typically, audio calls are classified into VoIP calls and cellular calls.
+
+- Voice over Internet Protocol (VoIP) is a technology that enables you to make voice calls using a broadband Internet connection. During a VoIP call, call information is packed into data packets and transmitted over the network. Therefore, VoIP calls have high requirements on network quality, and the call quality is closely related to the network connection speed.
+
+- Cellular call refers to the traditional telephony service provided by carriers. Currently, the APIs for developing cellular calls are open only to system applications.
+
+When developing the audio call service, you must use a proper audio processing policy based on the [audio scene](#audio-scene) and [ringer mode](#ringer-mode).
+
+## Audio Scene
+
+When an application uses the audio call service, the system switches to the call-related audio scene (specified by [AudioScene](../reference/apis/js-apis-audio.md#audioscene8)). The system has preset multiple audio scenes, including ringing, cellular call, and voice chat, and uses a scene-specific policy to process audio.
+
+For example, in the cellular call audio scene, the system prioritizes voice clarity. To deliver a crystal clear voice during calls, the system uses the 3A algorithms to preprocess audio data, suppressing echoes, eliminating background noise, and adjusting the volume range. 3A refers to three audio processing algorithms: Acoustic Echo Cancellation (AEC), Active Noise Control (ANC), and Automatic Gain Control (AGC).
+
+Currently, the following audio scenes are preset:
+
+- **AUDIO_SCENE_DEFAULT**: default audio scene, which can be used in all scenarios except audio calls.
+
+- **AUDIO_SCENE_RINGING**: ringing audio scene, which is used when a call is coming in and is open only to system applications.
+
+- **AUDIO_SCENE_PHONE_CALL**: cellular call audio scene, which is used for cellular calls and is open only to system applications.
+
+- **AUDIO_SCENE_VOICE_CHAT**: voice chat scene, which is used for VoIP calls.
+
+The application can call **getAudioScene** in the [AudioManager](../reference/apis/js-apis-audio.md#audiomanager) class to obtain the audio scene in use. Before starting or stopping the audio call service, the application can call this API to check whether the system has switched to the suitable audio scene.
+
+## Ringer Mode
+
+When an audio call comes in, the application notifies the user by playing a ringtone or vibrating, depending on the setting of [AudioRingMode](../reference/apis/js-apis-audio.md#audioringmode).
+
+The system has preset the following ringer modes:
+
+- **RINGER_MODE_SILENT**: silent mode, in which no sound is played when a call comes in.
+
+- **RINGER_MODE_VIBRATE**: vibration mode, in which no sound is played but the device vibrates when a call comes in.
+
+- **RINGER_MODE_NORMAL**: normal mode, in which a ringtone is played when a call comes in.
+
+The application can call **getRingerMode** in the [AudioVolumeGroupManager](../reference/apis/js-apis-audio.md#audiovolumegroupmanager9) class to obtain the ringer mode in use and then apply a proper policy to notify the user.
+
+If the application wants to learn about ringer mode changes in time, it can call **on('ringerModeChange')** in the **AudioVolumeGroupManager** class to listen for the changes. When the ringer mode changes, it will receive a notification and can make adjustments accordingly.
+
+## Audio Device Switching During a Call
+
+When a call comes in, the system selects an appropriate audio device based on the default priority. The application can switch the call to another audio device as required, as shown in the sketch below.
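+The following is a minimal sketch of such a switch. It uses **setCommunicationDevice** and **isCommunicationDeviceActive**, which are described in the next paragraph, and assumes that the speaker is the target device; a real application would choose the device based on user input and the obtained device list.
+
+```ts
+import audio from '@ohos.multimedia.audio';
+
+let audioRoutingManager = audio.getAudioManager().getRoutingManager();
+
+// Route the ongoing call to the speaker and verify the result.
+async function switchCallToSpeaker() {
+  await audioRoutingManager.setCommunicationDevice(audio.CommunicationDeviceType.SPEAKER, true);
+  let isActive = await audioRoutingManager.isCommunicationDeviceActive(audio.CommunicationDeviceType.SPEAKER);
+  console.info(`Speaker active state: ${isActive}`);
+}
+```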
+ +The audio devices that can be used for the audio call are specified by [CommunicationDeviceType](../reference/apis/js-apis-audio.md#communicationdevicetype9). The application can call **isCommunicationDeviceActive** in the [AudioRoutingManager](../reference/apis/js-apis-audio.md#audioroutingmanager9) class to check whether a communication device is active. It can also call **setCommunicationDevice** in the **AudioRoutingManager** class to set a communication device to the active state so that the device can be used for the call. diff --git a/en/application-dev/media/audio-capturer.md b/en/application-dev/media/audio-capturer.md deleted file mode 100644 index f7b01ce2a387af3471b297de329fe3267b9e9785..0000000000000000000000000000000000000000 --- a/en/application-dev/media/audio-capturer.md +++ /dev/null @@ -1,258 +0,0 @@ -# Audio Capture Development - -## Introduction - -You can use the APIs provided by **AudioCapturer** to record raw audio files, thereby implementing audio data collection. - -**Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioCapturer** instance. This is because some operations can be performed only when the audio capturer is in a given state. If the application performs an operation when the audio capturer is not in the given state, the system may throw an exception or generate other undefined behavior. - -## Working Principles - -This following figure shows the audio capturer state transitions. - -**Figure 1** Audio capturer state transitions - -![audio-capturer-state](figures/audio-capturer-state.png) - -- **PREPARED**: The audio capturer enters this state by calling **create()**. -- **RUNNING**: The audio capturer enters this state by calling **start()** when it is in the **PREPARED** state or by calling **start()** when it is in the **STOPPED** state. -- **STOPPED**: The audio capturer in the **RUNNING** state can call **stop()** to stop playing audio data. -- **RELEASED**: The audio capturer in the **PREPARED** or **STOPPED** state can use **release()** to release all occupied hardware and software resources. It will not transit to any other state after it enters the **RELEASED** state. - -## Constraints - -Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details, see [Permission Application Guide](../security/accesstoken-guidelines.md#declaring-permissions-in-the-configuration-file). - -## How to Develop - -For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8). - -1. Use **createAudioCapturer()** to create a global **AudioCapturer** instance. - - Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording state, and register a callback for notification. - - ```js - import audio from '@ohos.multimedia.audio'; - import fs from '@ohos.file.fs'; // It will be used for the call of the read function in step 3. - - // Perform a self-test on APIs related to audio rendering. - @Entry - @Component - struct AudioRenderer { - @State message: string = 'Hello World' - private audioCapturer: audio.AudioCapturer; // It will be called globally. 
- - async initAudioCapturer(){ - let audioStreamInfo = { - samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, - channels: audio.AudioChannel.CHANNEL_1, - sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, - encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW - } - - let audioCapturerInfo = { - source: audio.SourceType.SOURCE_TYPE_MIC, - capturerFlags: 0 // 0 is the extended flag bit of the audio capturer. The default value is 0. - } - - let audioCapturerOptions = { - streamInfo: audioStreamInfo, - capturerInfo: audioCapturerInfo - } - - this.audioCapturer = await audio.createAudioCapturer(audioCapturerOptions); - console.log('AudioRecLog: Create audio capturer success.'); - } - - ``` - -2. Use **start()** to start audio recording. - - The capturer state will be **STATE_RUNNING** once the audio capturer is started. The application can then begin reading buffers. - - ```js - async startCapturer() { - let state = this.audioCapturer.state; - // The audio capturer should be in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state after being started. - if (state == audio.AudioState.STATE_PREPARED || state == audio.AudioState.STATE_PAUSED || - state == audio.AudioState.STATE_STOPPED) { - await this.audioCapturer.start(); - state = this.audioCapturer.state; - if (state == audio.AudioState.STATE_RUNNING) { - console.info('AudioRecLog: Capturer started'); - } else { - console.error('AudioRecLog: Capturer start failed'); - } - } - } - ``` - -3. Read the captured audio data and convert it to a byte stream. Call **read()** repeatedly to read the data until the application stops the recording. - - The following example shows how to write recorded data into a file. - - ```js - async readData(){ - let state = this.audioCapturer.state; - // The read operation can be performed only when the state is STATE_RUNNING. - if (state != audio.AudioState.STATE_RUNNING) { - console.info('Capturer is not in a correct state to read'); - return; - } - const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the collected audio file. - let file = fs.openSync(path, 0o2); - let fd = file.fd; - if (file !== null) { - console.info('AudioRecLog: file created'); - } else { - console.info('AudioRecLog: file create : FAILED'); - return; - } - if (fd !== null) { - console.info('AudioRecLog: file fd opened in append mode'); - } - let numBuffersToCapture = 150; // Write data for 150 times. - let count = 0; - while (numBuffersToCapture) { - this.bufferSize = await this.audioCapturer.getBufferSize(); - let buffer = await this.audioCapturer.read(this.bufferSize, true); - let options = { - offset: count * this.bufferSize, - length: this.bufferSize - } - if (typeof(buffer) == undefined) { - console.info('AudioRecLog: read buffer failed'); - } else { - let number = fs.writeSync(fd, buffer, options); - console.info(`AudioRecLog: data written: ${number}`); - } - numBuffersToCapture--; - count++; - } - } - ``` - -4. Once the recording is complete, call **stop()** to stop the recording. - - ```js - async StopCapturer() { - let state = this.audioCapturer.state; - // The audio capturer can be stopped only when it is in STATE_RUNNING or STATE_PAUSED state. 
- if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) { - console.info('AudioRecLog: Capturer is not running or paused'); - return; - } - - await this.audioCapturer.stop(); - - state = this.audioCapturer.state; - if (state == audio.AudioState.STATE_STOPPED) { - console.info('AudioRecLog: Capturer stopped'); - } else { - console.error('AudioRecLog: Capturer stop failed'); - } - } - ``` - -5. After the task is complete, call **release()** to release related resources. - - ```js - async releaseCapturer() { - let state = this.audioCapturer.state; - // The audio capturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state. - if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) { - console.info('AudioRecLog: Capturer already released'); - return; - } - - await this.audioCapturer.release(); - - state = this.audioCapturer.state; - if (state == audio.AudioState.STATE_RELEASED) { - console.info('AudioRecLog: Capturer released'); - } else { - console.info('AudioRecLog: Capturer release failed'); - } - } - ``` - -6. (Optional) Obtain the audio capturer information. - - You can use the following code to obtain the audio capturer information: - - ```js - async getAudioCapturerInfo(){ - // Obtain the audio capturer state. - let state = this.audioCapturer.state; - // Obtain the audio capturer information. - let audioCapturerInfo : audio.AudioCapturerInfo = await this.audioCapturer.getCapturerInfo(); - // Obtain the audio stream information. - let audioStreamInfo : audio.AudioStreamInfo = await this.audioCapturer.getStreamInfo(); - // Obtain the audio stream ID. - let audioStreamId : number = await this.audioCapturer.getAudioStreamId(); - // Obtain the Unix timestamp, in nanoseconds. - let audioTime : number = await this.audioCapturer.getAudioTime(); - // Obtain a proper minimum buffer size. - let bufferSize : number = await this.audioCapturer.getBufferSize(); - } - ``` - -7. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event. - - After the mark reached event is subscribed to, when the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned. - - ```js - async markReach(){ - this.audioCapturer.on('markReach', 10, (reachNumber) => { - console.info('Mark reach event Received'); - console.info(`The Capturer reached frame: ${reachNumber}`); - }); - this.audioCapturer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for. - } - ``` - -8. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event. - - After the period reached event is subscribed to, each time the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned. - - ```js - async periodReach(){ - this.audioCapturer.on('periodReach', 10, (reachNumber) => { - console.info('Period reach event Received'); - console.info(`In this period, the Capturer reached frame: ${reachNumber}`); - }); - this.audioCapturer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for. - } - ``` - -9. If your application needs to perform some operations when the audio capturer state is updated, it can subscribe to the state change event. 
When the audio capturer state is updated, the application receives a callback containing the event type. - - ```js - async stateChange(){ - this.audioCapturer.on('stateChange', (state) => { - console.info(`AudioCapturerLog: Changed State to : ${state}`) - switch (state) { - case audio.AudioState.STATE_PREPARED: - console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------'); - console.info('Audio State is : Prepared'); - break; - case audio.AudioState.STATE_RUNNING: - console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------'); - console.info('Audio State is : Running'); - break; - case audio.AudioState.STATE_STOPPED: - console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------'); - console.info('Audio State is : stopped'); - break; - case audio.AudioState.STATE_RELEASED: - console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------'); - console.info('Audio State is : released'); - break; - default: - console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------'); - console.info('Audio State is : invalid'); - break; - } - }); - } - ``` diff --git a/en/application-dev/media/audio-input-device-management.md b/en/application-dev/media/audio-input-device-management.md new file mode 100644 index 0000000000000000000000000000000000000000..ebdadfaad7a9316cf055d3216ac3a94a1b052a33 --- /dev/null +++ b/en/application-dev/media/audio-input-device-management.md @@ -0,0 +1,88 @@ +# Audio Input Device Management + +If a device is connected to multiple audio input devices, you can use **AudioRoutingManager** to specify an audio input device to record audio. For details about the API reference, see [AudioRoutingManager](../reference/apis/js-apis-audio.md#audioroutingmanager9). + +## Creating an AudioRoutingManager Instance + +Before using **AudioRoutingManager** to manage audio devices, import the audio module and create an **AudioManager** instance. + +```ts +import audio from '@ohos.multimedia.audio'; // Import the audio module. + +let audioManager = audio.getAudioManager(); // Create an AudioManager instance. + +let audioRoutingManager = audioManager.getRoutingManager(); // Call an API of AudioManager to create an AudioRoutingManager instance. +``` + +## Supported Audio Input Device Types + +The table below lists the supported audio input devices. + +| Name| Value| Description| +| -------- | -------- | -------- | +| WIRED_HEADSET | 3 | Wired headset with a microphone.| +| BLUETOOTH_SCO | 7 | Bluetooth device using Synchronous Connection Oriented (SCO) links.| +| MIC | 15 | Microphone.| +| USB_HEADSET | 22 | USB Type-C headset.| + +## Obtaining Input Device Information + +Use **getDevices()** to obtain information about all the input devices. + +```ts +audioRoutingManager.getDevices(audio.DeviceFlag.INPUT_DEVICES_FLAG).then((data) => { + console.info('Promise returned to indicate that the device list is obtained.'); +}); +``` + +## Listening for Device Connection State Changes + +Set a listener to listen for changes of the device connection state. When a device is connected or disconnected, a callback is triggered. + +```ts +// Listen for connection state changes of audio devices. +audioRoutingManager.on('deviceChange', audio.DeviceFlag.INPUT_DEVICES_FLAG, (deviceChanged) => { + console.info('device change type: ' + deviceChanged.type); // Device connection state change. The value 0 means that the device is connected and 1 means that the device is disconnected. 
+  console.info('device descriptor size : ' + deviceChanged.deviceDescriptors.length);
+  console.info('device change descriptor: ' + deviceChanged.deviceDescriptors[0].deviceRole); // Device role.
+  console.info('device change descriptor: ' + deviceChanged.deviceDescriptors[0].deviceType); // Device type.
+});
+
+// Cancel the listener for the connection state changes of audio devices.
+audioRoutingManager.off('deviceChange', (deviceChanged) => {
+  console.info('Should be no callback.');
+});
+```
+
+## Selecting an Audio Input Device (for System Applications only)
+
+Currently, only one input device can be selected, and the device ID is used as the unique identifier. For details about audio device descriptors, see [AudioDeviceDescriptors](../reference/apis/js-apis-audio.md#audiodevicedescriptors).
+
+> **NOTE**
+>
+> The user can connect to a group of audio devices (for example, a pair of Bluetooth headsets), but the system treats them as one device (a group of devices that share the same device ID).
+
+```ts
+let inputAudioDeviceDescriptor = [{
+  deviceRole : audio.DeviceRole.INPUT_DEVICE,
+  deviceType : audio.DeviceType.MIC, // Use an input device type listed in the table above.
+  id : 1,
+  name : "",
+  address : "",
+  sampleRates : [44100],
+  channelCounts : [2],
+  channelMasks : [0],
+  networkId : audio.LOCAL_NETWORK_ID,
+  interruptGroupId : 1,
+  volumeGroupId : 1,
+}];
+
+async function selectInputDevice(){
+  audioRoutingManager.selectInputDevice(inputAudioDeviceDescriptor).then(() => {
+    console.info('Invoke selectInputDevice succeeded.');
+  }).catch((err) => {
+    console.error(`Invoke selectInputDevice failed, code is ${err.code}, message is ${err.message}`);
+  });
+}
+
+```
diff --git a/en/application-dev/media/audio-interruptmode.md b/en/application-dev/media/audio-interruptmode.md
deleted file mode 100644
index 48a53bf5d5990ac88aae1271466a6aa36d52ac98..0000000000000000000000000000000000000000
--- a/en/application-dev/media/audio-interruptmode.md
+++ /dev/null
@@ -1,55 +0,0 @@
-# Audio Interruption Mode Development
-
-## Introduction
-The audio interruption mode is used to control the playback of multiple audio streams.
-
-Audio applications can set the audio interruption mode to independent or shared under **AudioRenderer**.
-
-In shared mode, multiple audio streams share one session ID. In independent mode, each audio stream has an independent session ID.
-
-**Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions.
-
-## How to Develop
-
-For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
-
-1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.
-
-   Set parameters of the **AudioRenderer** instance in **audioRendererOptions**.
-
-   This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.
- -```js - import audio from '@ohos.multimedia.audio'; - - var audioStreamInfo = { - samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, - channels: audio.AudioChannel.CHANNEL_1, - sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, - encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW - } - - var audioRendererInfo = { - content: audio.ContentType.CONTENT_TYPE_SPEECH, - usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION, - rendererFlags: 1 - } - - var audioRendererOptions = { - streamInfo: audioStreamInfo, - rendererInfo: audioRendererInfo - } - -let audioRenderer = await audio.createAudioRenderer(audioRendererOptions); - ``` - -2. Set the audio interruption mode. - - After the **AudioRenderer** instance is initialized, you can set the audio interruption mode.
- - ```js - var mode_ = audio.InterruptMode.SHARE_MODE; - await this.audioRenderer.setInterruptMode(mode_).then(() => { - console.log('[JSAR] [SetInterruptMode] Setting: '+ (mode_ == 0? " share mode":"independent mode") + "success"); - }); - ``` diff --git a/en/application-dev/media/audio-output-device-management.md b/en/application-dev/media/audio-output-device-management.md new file mode 100644 index 0000000000000000000000000000000000000000..ad20276c60ce7e535f99778e18d04e4e50e29dc6 --- /dev/null +++ b/en/application-dev/media/audio-output-device-management.md @@ -0,0 +1,90 @@ +# Audio Output Device Management + +If a device is connected to multiple audio output devices, you can use **AudioRoutingManager** to specify an audio output device to play audio. For details about the API reference, see [AudioRoutingManager](../reference/apis/js-apis-audio.md#audioroutingmanager9). + +## Creating an AudioRoutingManager Instance + +Before using **AudioRoutingManager** to manage audio devices, import the audio module and create an **AudioManager** instance. + +```ts +import audio from '@ohos.multimedia.audio'; // Import the audio module. + +let audioManager = audio.getAudioManager(); // Create an AudioManager instance. + +let audioRoutingManager = audioManager.getRoutingManager(); // Call an API of AudioManager to create an AudioRoutingManager instance. +``` + +## Supported Audio Output Device Types + +The table below lists the supported audio output devices. + +| Name| Value| Description| +| -------- | -------- | -------- | +| EARPIECE | 1 | Earpiece.| +| SPEAKER | 2 | Speaker.| +| WIRED_HEADSET | 3 | Wired headset with a microphone.| +| WIRED_HEADPHONES | 4 | Wired headset without microphone.| +| BLUETOOTH_SCO | 7 | Bluetooth device using Synchronous Connection Oriented (SCO) links.| +| BLUETOOTH_A2DP | 8 | Bluetooth device using Advanced Audio Distribution Profile (A2DP) links.| +| USB_HEADSET | 22 | USB Type-C headset.| + +## Obtaining Output Device Information + +Use **getDevices()** to obtain information about all the output devices. + +```ts +audioRoutingManager.getDevices(audio.DeviceFlag.OUTPUT_DEVICES_FLAG).then((data) => { + console.info('Promise returned to indicate that the device list is obtained.'); +}); +``` + +## Listening for Device Connection State Changes + +Set a listener to listen for changes of the device connection state. When a device is connected or disconnected, a callback is triggered. + +```ts +// Listen for connection state changes of audio devices. +audioRoutingManager.on('deviceChange', audio.DeviceFlag.OUTPUT_DEVICES_FLAG, (deviceChanged) => { + console.info('device change type: ' + deviceChanged.type); // Device connection state change. The value 0 means that the device is connected and 1 means that the device is disconnected. + console.info('device descriptor size : ' + deviceChanged.deviceDescriptors.length); + console.info('device change descriptor: ' + deviceChanged.deviceDescriptors[0].deviceRole); // Device role. + console.info('device change descriptor: ' + deviceChanged.deviceDescriptors[0].deviceType); // Device type. +}); + +// Cancel the listener for the connection state changes of audio devices. +audioRoutingManager.off('deviceChange', (deviceChanged) => { + console.info('Should be no callback.'); +}); +``` + +## Selecting an Audio Output Device (for System Applications only) + +Currently, only one output device can be selected, and the device ID is used as the unique identifier. 
For details about audio device descriptors, see [AudioDeviceDescriptors](../reference/apis/js-apis-audio.md#audiodevicedescriptors). + +> **NOTE** +> +> The user can connect to a group of audio devices (for example, a pair of Bluetooth headsets), but the system treats them as one device (a group of devices that share the same device ID). + +```ts +let outputAudioDeviceDescriptor = [{ + deviceRole : audio.DeviceRole.OUTPUT_DEVICE, + deviceType : audio.DeviceType.SPEAKER, + id : 1, + name : "", + address : "", + sampleRates : [44100], + channelCounts : [2], + channelMasks : [0], + networkId : audio.LOCAL_NETWORK_ID, + interruptGroupId : 1, + volumeGroupId : 1, +}]; + +async function selectOutputDevice(){ + audioRoutingManager.selectOutputDevice(outputAudioDeviceDescriptor).then(() => { + console.info('Invoke selectOutputDevice succeeded.'); + }).catch((err) => { + console.error(`Invoke selectOutputDevice failed, code is ${err.code}, message is ${err.message}`); + }); +} +``` diff --git a/en/application-dev/media/audio-overview.md b/en/application-dev/media/audio-overview.md deleted file mode 100755 index e1fd93eab8238b8ae55c9ce3dff2e807a1585a00..0000000000000000000000000000000000000000 --- a/en/application-dev/media/audio-overview.md +++ /dev/null @@ -1,20 +0,0 @@ -# Audio Overview - -You can use APIs provided by the audio module to implement audio-related features, including audio playback and volume management. - -## Basic Concepts - -- **Sampling** - Sampling is a process to obtain discrete-time signals by extracting samples from analog signals in a continuous time domain at a specific interval. - -- **Sampling rate** - Sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal. It is measured in Hz. Generally, human hearing range is from 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 22.05 kHz, 16 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, 96 kHz, and 192 kHz. - -- **Channel** - Channels refer to different spatial positions where independent audio signals are recorded or played. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback. - -- **Audio frame** - Audio data is in stream form. For the convenience of audio algorithm processing and transmission, it is generally agreed that a data amount in a unit of 2.5 to 60 milliseconds is one audio frame. This unit is called sampling time, and its length is specific to codecs and the application requirements. - -- **PCM**
-  Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples.
diff --git a/en/application-dev/media/audio-playback-concurrency.md b/en/application-dev/media/audio-playback-concurrency.md
new file mode 100644
index 0000000000000000000000000000000000000000..0b36594f6bef62c7ba7588bc8977af67609a6c9d
--- /dev/null
+++ b/en/application-dev/media/audio-playback-concurrency.md
@@ -0,0 +1,119 @@
+# Audio Playback Concurrency Policy
+
+## Audio Interruption Policy
+
+If multiple audio streams are played at the same time, the user experience can be unpleasant or even jarring. To address this issue, OpenHarmony presets the audio interruption policy so that only the audio stream holding audio focus can be played.
+
+When an application attempts to play audio, the system requests audio focus for the audio stream. The audio stream that gains the focus can be played. If the request is rejected, the audio stream cannot be played. If the audio stream is interrupted by another, it loses the focus and therefore the playback is paused. All these actions are automatically performed by the system and require no additional operations from the application. However, to maintain state consistency between the application and the system and ensure a good user experience, it is recommended that the application [listen for the audio interruption event](#listening-for-the-audio-interruption-event) and perform the corresponding processing when receiving such an event (specified by [InterruptEvent](../reference/apis/js-apis-audio.md#interruptevent9)).
+
+OpenHarmony presets two [audio interruption modes](#audio-interruption-mode) to specify whether audio concurrency is controlled by the application or the system. You can choose a mode for each of the audio streams created by the same application.
+
+The audio interruption policy determines the operations (for example, pause, resume, duck, or unduck) to be performed on the audio stream. These operations can be performed by the system or the application. To distinguish the executor of these operations, the [audio interruption type](#audio-interruption-type) is introduced, and two audio interruption types are preset.
+
+### Audio Interruption Mode
+
+Two audio interruption modes, specified by [InterruptMode](../reference/apis/js-apis-audio.md#interruptmode9), are preset in the audio interruption policy:
+
+- **SHARED_MODE**: Multiple audio streams created by an application share one audio focus. The concurrency rules between these audio streams are determined by the application, without the use of the audio interruption policy. However, if another application needs to play audio while one of these audio streams is being played, the audio interruption policy is triggered.
+
+- **INDEPENDENT_MODE**: Each audio stream created by an application has an independent audio focus. When multiple audio streams are played concurrently, the audio interruption policy is triggered.
+
+The application can select an audio interruption mode as required. By default, **SHARED_MODE** is used.
+
+You can set the audio interruption mode in either of the following ways:
+
+- If you [use the AVPlayer to develop audio playback](using-avplayer-for-playback.md), set the [audioInterruptMode](../reference/apis/js-apis-media.md#avplayer9) attribute of the AVPlayer to set the audio interruption mode.
+
+- If you [use the AudioRenderer to develop audio playback](using-audiorenderer-for-playback.md), call [setInterruptMode](../reference/apis/js-apis-audio.md#setinterruptmode9) of the AudioRenderer to set the audio interruption mode.
+
+
+### Audio Interruption Type
+
+The audio interruption policy (containing two audio interruption modes) determines the operation to be performed on each audio stream. These operations can be carried out by the system or the application. To distinguish the executors, the audio interruption type, specified by [InterruptForceType](../reference/apis/js-apis-audio.md#interruptforcetype9), is introduced.
+
+- **INTERRUPT_FORCE**: The operation is performed by the system. The system forcibly interrupts audio playback.
+
+- **INTERRUPT_SHARE**: The operation is performed by the application. The application can take action or ignore the event as required.
+
+For the pause operation, the **INTERRUPT_FORCE** type is always used and cannot be changed by the application. However, the application can choose to use **INTERRUPT_SHARE** for other operations, such as the resume operation. The application can obtain the audio interruption type from the member variable **forceType** in the audio interruption event.
+
+During audio playback, the system automatically requests, holds, and releases the focus for the audio stream. When audio interruption occurs, the system forcibly pauses or stops the audio stream or ducks its volume, and sends an audio interruption event callback to the application. To maintain state consistency between the application and the system and ensure a good user experience, it is recommended that the application [listen for the audio interruption event](#listening-for-the-audio-interruption-event) and perform processing when receiving such an event.
+
+For operations that cannot be forcibly performed by the system (for example, resume), the system sends the audio interruption event containing **INTERRUPT_SHARE**, and the application can choose to take action or ignore the event.
+
+## Listening for the Audio Interruption Event
+
+Your application is advised to listen for the audio interruption event when playing audio. When audio interruption occurs, the system processes the audio stream according to the preset policy and sends the audio interruption event to the application.
+
+Upon receiving the event, the application carries out processing based on the event content to keep its state consistent with the expected effect.
+
+You can use either of the following methods to listen for the audio interruption event:
+
+- If you [use the AVPlayer to develop audio playback](using-avplayer-for-playback.md), call [on('audioInterrupt')](../reference/apis/js-apis-media.md#onaudiointerrupt9) of the AVPlayer to listen for the event.
+
+- If you [use the AudioRenderer to develop audio playback](using-audiorenderer-for-playback.md), call [on('audioInterrupt')](../reference/apis/js-apis-audio.md#onaudiointerrupt9) of the AudioRenderer to listen for the event.
+
+  To deliver an optimal user experience, the application needs to perform processing based on the event content. The following uses the AudioRenderer as an example to describe the recommended application processing. (The recommended processing is similar if the AVPlayer is used to develop audio playback.) You can customize the code to implement your own audio playback functionality or application processing based on service requirements.
+ +```ts +let isPlay; // An identifier specifying whether the audio stream is being played. In actual development, this parameter corresponds to the module related to the audio playback state. +let isDucked; // An identifier specifying whether to duck the volume down. In actual development, this parameter corresponds to the module related to the audio volume. +let started; // An identifier specifying whether the start operation is successful. + +async function onAudioInterrupt(){ + // The AudioRenderer is used as an example to describe how to develop audio playback. The audioRenderer variable is the AudioRenderer instance created for playback. + audioRenderer.on('audioInterrupt', async(interruptEvent) => { + // When an audio interruption event occurs, the audioRenderer receives the interruptEvent callback and performs processing based on the content in the callback. + // The audioRenderer reads the value of interruptEvent.forceType to see whether the system has forcibly performed the operation. + // The audioRenderer then reads the value of interruptEvent.hintType and performs corresponding processing. + if (interruptEvent.forceType === audio.InterruptForceType.INTERRUPT_FORCE) { + // If the value of interruptEvent.forceType is INTERRUPT_FORCE, the system has performed audio-related processing, and the application needs to update its state and make adjustments accordingly. + switch (interruptEvent.hintType) { + case audio.InterruptHint.INTERRUPT_HINT_PAUSE: + // The system has paused the audio stream (the focus is temporarily lost). To ensure state consistency, the application needs to switch to the audio paused state. + // Temporarily losing the focus: After the other audio stream releases the focus, the current audio stream will receive the audio interruption event corresponding to resume and automatically resume the playback. + isPlay = false; // A simplified processing indicating several operations for switching the application to the audio paused state. + break; + case audio.InterruptHint.INTERRUPT_HINT_STOP: + // The system has stopped the audio stream (the focus is permanently lost). To ensure state consistency, the application needs to switch to the audio paused state. + // Permanently losing the focus: No audio interruption event will be received. The user must manually trigger the operation to resume playback. + isPlay = false; // A simplified processing indicating several operations for switching the application to the audio paused state. + break; + case audio.InterruptHint.INTERRUPT_HINT_DUCK: + // The system has ducked the volume down (20% of the normal volume by default). To ensure state consistency, the application needs to switch to the volume decreased state. + // If the application does not want to play at a lower volume, it can select another processing mode, for example, proactively pausing the playback. + isDucked = true; // A simplified processing indicating several operations for switching the application to the volume decreased state. + break; + case audio.InterruptHint.INTERRUPT_HINT_UNDUCK: + // The system has restored the audio volume to normal. To ensure state consistency, the application needs to switch to the normal volume state. + isDucked = false; // A simplified processing indicating several operations for switching the application to the normal volume state. 
+          break;
+        default:
+          break;
+      }
+    } else if (interruptEvent.forceType === audio.InterruptForceType.INTERRUPT_SHARE) {
+      // If the value of interruptEvent.forceType is INTERRUPT_SHARE, the application can take action or ignore as required.
+      switch (interruptEvent.hintType) {
+        case audio.InterruptHint.INTERRUPT_HINT_RESUME:
+          // The paused audio stream can be played. It is recommended that the application continue to play the audio stream and switch to the audio playing state.
+          // If the application does not want to continue the playback, it can ignore the event.
+          // To continue the playback, the application needs to call start(), and use the identifier variable started to record the execution result of start().
+          await audioRenderer.start().then(async function () {
+            started = true; // Calling start() is successful.
+          }).catch((err) => {
+            started = false; // Calling start() fails.
+          });
+          // If calling start() is successful, the application needs to switch to the audio playing state.
+          if (started) {
+            isPlay = true; // A simplified processing indicating several operations for switching the application to the audio playing state.
+          } else {
+            // Resuming the audio playback fails.
+          }
+          break;
+        default:
+          break;
+      }
+    }
+  });
+}
+```
diff --git a/en/application-dev/media/audio-playback-overview.md b/en/application-dev/media/audio-playback-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..d17970d6de9b8b238db74d971ad5f58c605462eb
--- /dev/null
+++ b/en/application-dev/media/audio-playback-overview.md
@@ -0,0 +1,25 @@
+# Audio Playback Development
+
+## Selecting an Audio Playback Development Mode
+
+OpenHarmony provides multiple classes for you to develop audio playback applications. You can select them based on the audio data formats, audio sources, audio usage scenarios, and even the programming language you use. Selecting a suitable class reduces your development workload and helps your application deliver a better effect.
+
+- [AVPlayer](using-avplayer-for-playback.md): provides ArkTS and JS APIs to implement audio and video playback. It also supports parsing streaming media and local assets, decapsulating media assets, decoding audio, and outputting audio. It can play audio files in MP3 and M4A formats, but not in PCM format.
+
+- [AudioRenderer](using-audiorenderer-for-playback.md): provides ArkTS and JS APIs to implement audio output. It supports only the PCM format and requires applications to continuously write audio data. The applications can perform data preprocessing, for example, setting the sampling rate and bit width of the audio data, before writing it. This class can be used to develop more professional and diverse playback applications. To use this class, you must have basic audio processing knowledge.
+
+- [OpenSL ES](using-opensl-es-for-playback.md): provides a set of standard cross-platform native audio APIs. It supports audio output in PCM format and is applicable to playback applications that are ported from other embedded platforms or that implement audio output at the native layer.
+
+- [TonePlayer](using-toneplayer-for-playback.md): provides ArkTS and JS APIs to implement the playback of dialing tones and ringback tones. It can be used to play the content selected from a fixed type range, without requiring the input of media assets or audio data. This class is applicable only to specific scenarios where dialing tones and ringback tones are played, and it is available only to system applications.
+
+Applications often need to use short sound effects, such as the camera shutter, key press, and game shooting sound effects. Currently, only the **AVPlayer** class can implement audio file playback. More APIs will be provided to support this scenario in later versions.
+
+## Precautions for Developing Audio Playback Applications
+
+To enable your application to play audio in the background or when the screen is off, the application must meet the following conditions:
+
+1. The application must be registered with the system for unified management through the **AVSession** APIs. Otherwise, the playback is forcibly stopped when the application switches to the background. For details, see [AVSession Overview](avsession-overview.md).
+
+2. The application must request a continuous task to prevent itself from being suspended. For details, see [Continuous Task Development](../task-management/continuous-task-dev-guide.md).
+
+If the playback stops when the application switches to the background, check the log to see whether the application has requested a continuous task: if it has, no **pause id** record is written to the log; otherwise, a **pause id** record is present.
diff --git a/en/application-dev/media/audio-playback-stream-management.md b/en/application-dev/media/audio-playback-stream-management.md
new file mode 100644
index 0000000000000000000000000000000000000000..c6cf398b8403b3f799a1db20716021c91ca6e078
--- /dev/null
+++ b/en/application-dev/media/audio-playback-stream-management.md
@@ -0,0 +1,120 @@
+# Audio Playback Stream Management
+
+An audio playback application must be aware of audio stream state changes and perform corresponding operations. For example, when detecting that an audio stream is being played or paused, the application must change the UI display of the **Play** button.
+
+## Reading or Listening for Audio Stream State Changes in the Application
+
+Create an AudioRenderer by referring to [Using AudioRenderer for Audio Playback](using-audiorenderer-for-playback.md) or [audio.createAudioRenderer](../reference/apis/js-apis-audio.md#audiocreateaudiorenderer8). Then obtain the audio stream state changes in either of the following ways:
+
+- Check the [state](../reference/apis/js-apis-audio.md#attributes) of the AudioRenderer.
+
+  ```ts
+  let audioRendererState = audioRenderer.state;
+  console.info(`Current state is: ${audioRendererState}`)
+  ```
+
+- Register **stateChange** to listen for state changes of the AudioRenderer.
+
+  ```ts
+  audioRenderer.on('stateChange', (rendererState) => {
+    console.info(`State change to: ${rendererState}`)
+  });
+  ```
+
+The application then performs an operation, for example, changing the display of the **Play** button, by comparing the obtained state with [AudioState](../reference/apis/js-apis-audio.md#audiostate8).
+
+## Reading or Listening for Changes in All Audio Streams
+
+If an application needs to obtain the change information about all audio streams, it can use **AudioStreamManager** to read or listen for the changes of all audio streams.
+
+> **NOTE**
+>
+> Audio stream change information marked as a system API can be viewed only by system applications.
+
+The figure below shows the call relationship of audio stream management.
+
+![Call relationship of audio stream management](figures/audio-stream-mgmt-invoking-relationship.png)
+
+During application development, first use **getStreamManager()** to create an **AudioStreamManager** instance.
Then call **on('audioRendererChange')** to listen for audio stream changes and obtain a notification when the audio stream state or device changes. To cancel listening for these changes, call **off('audioRendererChange')**. You can also call **getCurrentAudioRendererInfoArray()** to obtain information such as the unique ID of the playback stream, UID of the playback stream client, and stream status.
+
+For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9).
+
+## How to Develop
+
+1. Create an **AudioStreamManager** instance.
+
+   Before using **AudioStreamManager** APIs, you must use **getStreamManager()** to create an **AudioStreamManager** instance.
+
+   ```ts
+   import audio from '@ohos.multimedia.audio';
+   let audioManager = audio.getAudioManager();
+   let audioStreamManager = audioManager.getStreamManager();
+   ```
+
+2. Use **on('audioRendererChange')** to listen for audio playback stream changes. If the application needs to receive a notification when the audio playback stream state or device changes, it can subscribe to this event.
+
+   ```ts
+   audioStreamManager.on('audioRendererChange', (AudioRendererChangeInfoArray) => {
+     for (let i = 0; i < AudioRendererChangeInfoArray.length; i++) {
+       let AudioRendererChangeInfo = AudioRendererChangeInfoArray[i];
+       console.info(`## RendererChange on is called for ${i} ##`);
+       console.info(`StreamId for ${i} is: ${AudioRendererChangeInfo.streamId}`);
+       console.info(`Content ${i} is: ${AudioRendererChangeInfo.rendererInfo.content}`);
+       console.info(`Stream ${i} is: ${AudioRendererChangeInfo.rendererInfo.usage}`);
+       console.info(`Flag ${i} is: ${AudioRendererChangeInfo.rendererInfo.rendererFlags}`);
+       for (let j = 0; j < AudioRendererChangeInfo.deviceDescriptors.length; j++) {
+         console.info(`Id: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].id}`);
+         console.info(`Type: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].deviceType}`);
+         console.info(`Role: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].deviceRole}`);
+         console.info(`Name: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].name}`);
+         console.info(`Address: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].address}`);
+         console.info(`SampleRates: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].sampleRates[0]}`);
+         console.info(`ChannelCounts: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].channelCounts[0]}`);
+         console.info(`ChannelMask: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].channelMasks}`);
+       }
+     }
+   });
+   ```
+
+3. (Optional) Use **off('audioRendererChange')** to cancel listening for audio playback stream changes.
+
+   ```ts
+   audioStreamManager.off('audioRendererChange');
+   console.info('RendererChange Off is called');
+   ```
+
+4. (Optional) Call **getCurrentAudioRendererInfoArray()** to obtain information about all audio playback streams.
+
+   This API can be used to obtain the unique ID of the audio playback stream, UID of the audio playback client, audio state, and other information about the audio player.
+   > **NOTE**
+   >
+   > Before listening for state changes of all audio streams, the application must request the **ohos.permission.USE_BLUETOOTH** [permission](../security/accesstoken-guidelines.md) so that the device name and device address (Bluetooth-related attributes) can be displayed correctly.
+
+  ```ts
+  async function getCurrentAudioRendererInfoArray() {
+    await audioStreamManager.getCurrentAudioRendererInfoArray().then(function (AudioRendererChangeInfoArray) {
+      console.info(`getCurrentAudioRendererInfoArray Get Promise is called`);
+      if (AudioRendererChangeInfoArray != null) {
+        for (let i = 0; i < AudioRendererChangeInfoArray.length; i++) {
+          let AudioRendererChangeInfo = AudioRendererChangeInfoArray[i];
+          console.info(`StreamId for ${i} is: ${AudioRendererChangeInfo.streamId}`);
+          console.info(`Content ${i} is: ${AudioRendererChangeInfo.rendererInfo.content}`);
+          console.info(`Stream ${i} is: ${AudioRendererChangeInfo.rendererInfo.usage}`);
+          console.info(`Flag ${i} is: ${AudioRendererChangeInfo.rendererInfo.rendererFlags}`);
+          for (let j = 0; j < AudioRendererChangeInfo.deviceDescriptors.length; j++) {
+            console.info(`Id: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].id}`);
+            console.info(`Type: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].deviceType}`);
+            console.info(`Role: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].deviceRole}`);
+            console.info(`Name: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].name}`);
+            console.info(`Address: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].address}`);
+            console.info(`SampleRates: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].sampleRates[0]}`);
+            console.info(`ChannelCounts: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].channelCounts[0]}`);
+            console.info(`ChannelMask: ${i} : ${AudioRendererChangeInfo.deviceDescriptors[j].channelMasks}`);
+          }
+        }
+      }
+    }).catch((err) => {
+      console.error(`Invoke getCurrentAudioRendererInfoArray failed, code is ${err.code}, message is ${err.message}`);
+    });
+  }
+  ```
diff --git a/en/application-dev/media/audio-playback.md b/en/application-dev/media/audio-playback.md
deleted file mode 100644
index 1c7953d32b8ecee4c0ff34e82ab8d13947ac9271..0000000000000000000000000000000000000000
--- a/en/application-dev/media/audio-playback.md
+++ /dev/null
@@ -1,243 +0,0 @@
-# Audio Playback Development
-
-## Introduction
-
-You can use audio playback APIs to convert audio data into audible analog signals and play the signals using output devices. You can also manage playback tasks. For example, you can control the playback and volume, obtain track information, and release resources.
-
-## Working Principles
-
-The following figures show the audio playback state transition and the interaction with external modules for audio playback.
-
-**Figure 1** Audio playback state transition
-
-![en-us_image_audio_state_machine](figures/en-us_image_audio_state_machine.png)
-
-**NOTE**: If the status is **Idle**, setting the **src** attribute does not change the status. In addition, after the **src** attribute is set successfully, you must call **reset()** before setting it to another value.
-
-
-
-**Figure 2** Interaction with external modules for audio playback
-
-![en-us_image_audio_player](figures/en-us_image_audio_player.png)
-
-**NOTE**: When a third-party application calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework and outputs the audio data decoded by the software to the audio HDI of the hardware interface layer to implement audio playback.
-
-## How to Develop
-
-For details about the APIs, see [AudioPlayer in the Media API](../reference/apis/js-apis-media.md#audioplayer).
- -> **NOTE** -> -> The method for obtaining the path in the FA model is different from that in the stage model. For details about how to obtain the path, see [Application Sandbox Path Guidelines](../reference/apis/js-apis-fileio.md#guidelines). - -### Full-Process Scenario - -The full audio playback process includes creating an instance, setting the URI, playing audio, seeking to the playback position, setting the volume, pausing playback, obtaining track information, stopping playback, resetting the player, and releasing resources. - -For details about the **src** types supported by **AudioPlayer**, see the [src attribute](../reference/apis/js-apis-media.md#audioplayer_attributes). - -```js -import media from '@ohos.multimedia.media' -import fs from '@ohos.file.fs' - -// Print the stream track information. -function printfDescription(obj) { - for (let item in obj) { - let property = obj[item]; - console.info('audio key is ' + item); - console.info('audio value is ' + property); - } -} - -// Set the player callbacks. -function setCallBack(audioPlayer) { - audioPlayer.on('dataLoad', () => { // Set the 'dataLoad' event callback, which is triggered when the src attribute is set successfully. - console.info('audio set source success'); - audioPlayer.play(); // The play() API can be invoked only after the 'dataLoad' event callback is complete. The 'play' event callback is then triggered. - }); - audioPlayer.on('play', () => { // Set the 'play' event callback. - console.info('audio play success'); - audioPlayer.pause(); // Trigger the 'pause' event callback and pause the playback. - }); - audioPlayer.on('pause', () => { // Set the 'pause' event callback. - console.info('audio pause success'); - audioPlayer.seek(5000); // Trigger the 'timeUpdate' event callback, and seek to 5000 ms for playback. - }); - audioPlayer.on('stop', () => { // Set the 'stop' event callback. - console.info('audio stop success'); - audioPlayer.reset(); // Trigger the 'reset' event callback, and reconfigure the src attribute to switch to the next song. - }); - audioPlayer.on('reset', () => { // Set the 'reset' event callback. - console.info('audio reset success'); - audioPlayer.release(); // Release the AudioPlayer instance. - audioPlayer = undefined; - }); - audioPlayer.on('timeUpdate', (seekDoneTime) => { // Set the 'timeUpdate' event callback. - if (typeof(seekDoneTime) == 'undefined') { - console.info('audio seek fail'); - return; - } - console.info('audio seek success, and seek time is ' + seekDoneTime); - audioPlayer.setVolume(0.5); // Trigger the 'volumeChange' event callback. - }); - audioPlayer.on('volumeChange', () => { // Set the 'volumeChange' event callback. - console.info('audio volumeChange success'); - audioPlayer.getTrackDescription((error, arrlist) => { // Obtain the audio track information in callback mode. - if (typeof (arrlist) != 'undefined') { - for (let i = 0; i < arrlist.length; i++) { - printfDescription(arrlist[i]); - } - } else { - console.log(`audio getTrackDescription fail, error:${error.message}`); - } - audioPlayer.stop(); // Trigger the 'stop' event callback to stop the playback. - }); - }); - audioPlayer.on('finish', () => { // Set the 'finish' event callback, which is triggered when the playback is complete. - console.info('audio play finish'); - }); - audioPlayer.on('error', (error) => { // Set the 'error' event callback. 
- console.info(`audio error called, errName is ${error.name}`); - console.info(`audio error called, errCode is ${error.code}`); - console.info(`audio error called, errMessage is ${error.message}`); - }); -} - -async function audioPlayerDemo() { - // 1. Create an AudioPlayer instance. - let audioPlayer = media.createAudioPlayer(); - setCallBack(audioPlayer); // Set the event callbacks. - // 2. Set the URI of the audio file. - let fdPath = 'fd://' - let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements. - // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command. - let path = pathDir + '/01.mp3' - let file = await fs.open(path); - fdPath = fdPath + '' + file.fd; - audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback. -} -``` - -### Normal Playback Scenario - -```js -import media from '@ohos.multimedia.media' -import fs from '@ohos.file.fs' - -export class AudioDemo { - // Set the player callbacks. - setCallBack(audioPlayer) { - audioPlayer.on('dataLoad', () => { // Set the 'dataLoad' event callback, which is triggered when the src attribute is set successfully. - console.info('audio set source success'); - audioPlayer.play(); // Call the play() API to start the playback and trigger the 'play' event callback. - }); - audioPlayer.on('play', () => { // Set the 'play' event callback. - console.info('audio play success'); - }); - audioPlayer.on('finish', () => { // Set the 'finish' event callback, which is triggered when the playback is complete. - console.info('audio play finish'); - audioPlayer.release(); // Release the AudioPlayer instance. - audioPlayer = undefined; - }); - } - - async audioPlayerDemo() { - let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance. - this.setCallBack(audioPlayer); // Set the event callbacks. - let fdPath = 'fd://' - let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements. - // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command. - let path = pathDir + '/01.mp3' - let file = await fs.open(path); - fdPath = fdPath + '' + file.fd; - audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback. - } -} -``` - -### Switching to the Next Song - -```js -import media from '@ohos.multimedia.media' -import fs from '@ohos.file.fs' - -export class AudioDemo { -// Set the player callbacks. - private isNextMusic = false; - setCallBack(audioPlayer) { - audioPlayer.on('dataLoad', () => { // Set the 'dataLoad' event callback, which is triggered when the src attribute is set successfully. - console.info('audio set source success'); - audioPlayer.play(); // Call the play() API to start the playback and trigger the 'play' event callback. - }); - audioPlayer.on('play', () => { // Set the 'play' event callback. - console.info('audio play success'); - audioPlayer.reset(); // Call the reset() API and trigger the 'reset' event callback. - }); - audioPlayer.on('reset', () => { // Set the 'reset' event callback. - console.info('audio play success'); - if (!this.isNextMusic) { // When isNextMusic is false, changing songs is implemented. 
- this.nextMusic(audioPlayer); // Changing songs is implemented. - } else { - audioPlayer.release(); // Release the AudioPlayer instance. - audioPlayer = undefined; - } - }); - } - - async nextMusic(audioPlayer) { - this.isNextMusic = true; - let nextFdPath = 'fd://' - let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements. - // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\02.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command. - let nextpath = pathDir + '/02.mp3' - let nextFile = await fs.open(nextpath); - nextFdPath = nextFdPath + '' + nextFile.fd; - audioPlayer.src = nextFdPath; // Set the src attribute and trigger the 'dataLoad' event callback. - } - - async audioPlayerDemo() { - let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance. - this.setCallBack(audioPlayer); // Set the event callbacks. - let fdPath = 'fd://' - let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements. - // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command. - let path = pathDir + '/01.mp3' - let file = await fs.open(path); - fdPath = fdPath + '' + file.fd; - audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback. - } -} -``` - -### Looping a Song - -```js -import media from '@ohos.multimedia.media' -import fs from '@ohos.file.fs' - -export class AudioDemo { - // Set the player callbacks. - setCallBack(audioPlayer) { - audioPlayer.on('dataLoad', () => { // Set the 'dataLoad' event callback, which is triggered when the src attribute is set successfully. - console.info('audio set source success'); - audioPlayer.loop = true; // Set the loop playback attribute. - audioPlayer.play(); // Call the play() API to start the playback and trigger the 'play' event callback. - }); - audioPlayer.on('play', () => { // Set the 'play' event callback to start loop playback. - console.info('audio play success'); - }); - } - - async audioPlayerDemo() { - let audioPlayer = media.createAudioPlayer(); // Create an AudioPlayer instance. - this.setCallBack(audioPlayer); // Set the event callbacks. - let fdPath = 'fd://' - let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements. - // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\01.mp3 /data/app/el2/100/base/ohos.acts.multimedia.audio.audioplayer/haps/entry/files" command. - let path = pathDir + '/01.mp3' - let file = await fs.open(path); - fdPath = fdPath + '' + file.fd; - audioPlayer.src = fdPath; // Set the src attribute and trigger the 'dataLoad' event callback. - } -} -``` diff --git a/en/application-dev/media/audio-recorder.md b/en/application-dev/media/audio-recorder.md deleted file mode 100644 index 78650a61d0a803811394e623ab0bc46155438ba9..0000000000000000000000000000000000000000 --- a/en/application-dev/media/audio-recorder.md +++ /dev/null @@ -1,197 +0,0 @@ -# Audio Recording Development - -## Introduction - -During audio recording, audio signals are captured, encoded, and saved to files. 
You can specify parameters such as the sampling rate, number of audio channels, encoding format, encapsulation format, and output file path for audio recording. - -## Working Principles - -The following figures show the audio recording state transition and the interaction with external modules for audio recording. - -**Figure 1** Audio recording state transition - -![en-us_image_audio_recorder_state_machine](figures/en-us_image_audio_recorder_state_machine.png) - - - -**Figure 2** Interaction with external modules for audio recording - -![en-us_image_audio_recorder_zero](figures/en-us_image_audio_recorder_zero.png) - -**NOTE**: When a third-party recording application or recorder calls the JS interface provided by the JS interface layer to implement a feature, the framework layer invokes the audio component through the media service of the native framework to obtain the audio data captured through the audio HDI. The framework layer then encodes the audio data through software and saves the encoded and encapsulated audio data to a file to implement audio recording. - -## Constraints - -Before developing audio recording, configure the **ohos.permission.MICROPHONE** permission for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md). - -## How to Develop - -For details about the APIs, see [AudioRecorder in the Media API](../reference/apis/js-apis-media.md#audiorecorder). - -### Full-Process Scenario - -The full audio recording process includes creating an instance, setting recording parameters, starting, pausing, resuming, and stopping recording, and releasing resources. - -```js -import media from '@ohos.multimedia.media' -import mediaLibrary from '@ohos.multimedia.mediaLibrary' -export class AudioRecorderDemo { - private testFdNumber; // Used to save the FD address. - - // Set the callbacks related to audio recording. - setCallBack(audioRecorder) { - audioRecorder.on('prepare', () => { // Set the prepare event callback. - console.log('prepare success'); - audioRecorder.start(); // Call the start API to start recording and trigger the start event callback. - }); - audioRecorder.on('start', () => { // Set the start event callback. - console.log('audio recorder start success'); - audioRecorder.pause(); // Call the pause API to pause recording and trigger the pause event callback. - }); - audioRecorder.on('pause', () => { // Set the pause event callback. - console.log('audio recorder pause success'); - audioRecorder.resume(); // Call the resume API to resume recording and trigger the resume event callback. - }); - audioRecorder.on('resume', () => { // Set the resume event callback. - console.log('audio recorder resume success'); - audioRecorder.stop(); // Call the stop API to stop recording and trigger the stop event callback. - }); - audioRecorder.on('stop', () => { // Set the stop event callback. - console.log('audio recorder stop success'); - audioRecorder.reset(); // Call the reset API to reset the recorder and trigger the reset event callback. - }); - audioRecorder.on('reset', () => { // Set the reset event callback. - console.log('audio recorder reset success'); - audioRecorder.release(); // Call the release API to release resources and trigger the release event callback. - }); - audioRecorder.on('release', () => { // Set the release event callback. - console.log('audio recorder release success'); - audioRecorder = undefined; - }); - audioRecorder.on('error', (error) => { // Set the error event callback. 
- console.info(`audio error called, errName is ${error.name}`); - console.info(`audio error called, errCode is ${error.code}`); - console.info(`audio error called, errMessage is ${error.message}`); - }); - } - - // pathName indicates the passed recording file name, for example, 01.mp3. The generated file address is /storage/media/100/local/files/Video/01.mp3. - // To use the media library, declare the following permissions: ohos.permission.MEDIA_LOCATION, ohos.permission.WRITE_MEDIA, and ohos.permission.READ_MEDIA. - async getFd(pathName) { - let displayName = pathName; - const mediaTest = mediaLibrary.getMediaLibrary(); - let fileKeyObj = mediaLibrary.FileKey; - let mediaType = mediaLibrary.MediaType.VIDEO; - let publicPath = await mediaTest.getPublicDirectory(mediaLibrary.DirectoryType.DIR_VIDEO); - let dataUri = await mediaTest.createAsset(mediaType, displayName, publicPath); - if (dataUri != undefined) { - let args = dataUri.id.toString(); - let fetchOp = { - selections : fileKeyObj.ID + "=?", - selectionArgs : [args], - } - let fetchFileResult = await mediaTest.getFileAssets(fetchOp); - let fileAsset = await fetchFileResult.getAllObject(); - let fdNumber = await fileAsset[0].open('Rw'); - this.testFdNumber = "fd://" + fdNumber.toString(); - } - } - - async audioRecorderDemo() { - // 1. Create an AudioRecorder instance. - let audioRecorder = media.createAudioRecorder(); - // 2. Set the callbacks. - this.setCallBack(audioRecorder); - await this.getFd('01.mp3'); // Call the getFd method to obtain the FD address of the file to be recorded. - // 3. Set the recording parameters. - let audioRecorderConfig = { - audioEncodeBitRate : 22050, - audioSampleRate : 22050, - numberOfChannels : 2, - uri : this.testFdNumber, // testFdNumber is generated by getFd. - location : { latitude : 30, longitude : 130}, - audioEncoderMime : media.CodecMimeType.AUDIO_AAC, - fileFormat : media.ContainerFormatType.CFT_MPEG_4A, - } - audioRecorder.prepare(audioRecorderConfig); // Call the prepare method to trigger the prepare event callback. - } -} -``` - -### Normal Recording Scenario - -Unlike the full-process scenario, the normal recording scenario does not include the process of pausing and resuming recording. - -```js -import media from '@ohos.multimedia.media' -import mediaLibrary from '@ohos.multimedia.mediaLibrary' -export class AudioRecorderDemo { - private testFdNumber; // Used to save the FD address. - - // Set the callbacks related to audio recording. - setCallBack(audioRecorder) { - audioRecorder.on('prepare', () => { // Set the prepare event callback. - console.log('prepare success'); - audioRecorder.start(); // Call the start API to start recording and trigger the start event callback. - }); - audioRecorder.on('start', () => { // Set the start event callback. - console.log('audio recorder start success'); - audioRecorder.stop(); // Call the stop API to stop recording and trigger the stop event callback. - }); - audioRecorder.on('stop', () => { // Set the stop event callback. - console.log('audio recorder stop success'); - audioRecorder.release(); // Call the release API to release resources and trigger the release event callback. - }); - audioRecorder.on('release', () => { // Set the release event callback. - console.log('audio recorder release success'); - audioRecorder = undefined; - }); - audioRecorder.on('error', (error) => { // Set the error event callback. 
-      console.info(`audio error called, errName is ${error.name}`);
-      console.info(`audio error called, errCode is ${error.code}`);
-      console.info(`audio error called, errMessage is ${error.message}`);
-    });
-  }
-
-  // pathName indicates the passed recording file name, for example, 01.mp3. The generated file address is /storage/media/100/local/files/Video/01.mp3.
-  // To use the media library, declare the following permissions: ohos.permission.MEDIA_LOCATION, ohos.permission.WRITE_MEDIA, and ohos.permission.READ_MEDIA.
-  async getFd(pathName) {
-    let displayName = pathName;
-    const mediaTest = mediaLibrary.getMediaLibrary();
-    let fileKeyObj = mediaLibrary.FileKey;
-    let mediaType = mediaLibrary.MediaType.VIDEO;
-    let publicPath = await mediaTest.getPublicDirectory(mediaLibrary.DirectoryType.DIR_VIDEO);
-    let dataUri = await mediaTest.createAsset(mediaType, displayName, publicPath);
-    if (dataUri != undefined) {
-      let args = dataUri.id.toString();
-      let fetchOp = {
-        selections : fileKeyObj.ID + "=?",
-        selectionArgs : [args],
-      }
-      let fetchFileResult = await mediaTest.getFileAssets(fetchOp);
-      let fileAsset = await fetchFileResult.getAllObject();
-      let fdNumber = await fileAsset[0].open('Rw');
-      this.testFdNumber = "fd://" + fdNumber.toString();
-    }
-  }
-
-  async audioRecorderDemo() {
-    // 1. Create an AudioRecorder instance.
-    let audioRecorder = media.createAudioRecorder();
-    // 2. Set the callbacks.
-    this.setCallBack(audioRecorder);
-    await this.getFd('01.mp3'); // Call the getFd method to obtain the FD address of the file to be recorded.
-    // 3. Set the recording parameters.
-    let audioRecorderConfig = {
-      audioEncodeBitRate : 22050,
-      audioSampleRate : 22050,
-      numberOfChannels : 2,
-      uri : this.testFdNumber, // testFdNumber is generated by getFd.
-      location : { latitude : 30, longitude : 130},
-      audioEncoderMime : media.CodecMimeType.AUDIO_AAC,
-      fileFormat : media.ContainerFormatType.CFT_MPEG_4A,
-    }
-    audioRecorder.prepare(audioRecorderConfig); // Call the prepare method to trigger the prepare event callback.
-  }
-}
-```
diff --git a/en/application-dev/media/audio-recording-overview.md b/en/application-dev/media/audio-recording-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..698255fddd78d98f9e635b16b3db94e6980bd4a0
--- /dev/null
+++ b/en/application-dev/media/audio-recording-overview.md
@@ -0,0 +1,17 @@
+# Audio Recording Development
+
+## Selecting an Audio Recording Development Mode
+
+OpenHarmony provides multiple classes for you to develop audio recording applications. You can select them based on the recording output formats, audio usage scenarios, and even the programming language you use. Selecting a suitable class helps reduce your development workload and enables your application to deliver a better effect.
+
+- [AVRecorder](using-avrecorder-for-recording.md): provides ArkTS and JS APIs to implement audio and video recording. It also supports audio input, audio encoding, and media encapsulation. You can directly call device hardware, such as the microphone, for recording and generate M4A audio files.
+
+- [AudioCapturer](using-audiocapturer-for-recording.md): provides ArkTS and JS APIs to implement audio input. It supports only the PCM format and requires applications to continuously read audio data. The application can process the data after reading it. This class can be used to develop more professional and diverse recording applications. To use this class, you must have basic audio processing knowledge.
+
+- [OpenSL ES](using-opensl-es-for-recording.md): provides a set of standard, cross-platform, yet unique native audio APIs. It supports audio input in PCM format and is applicable to recording applications that are ported from other embedded platforms or that implement audio input at the native layer.
+
+## Precautions for Developing Audio Recording Applications
+
+The application must request the **ohos.permission.MICROPHONE** permission from the user before invoking the microphone to record audio.
+
+For details about how to request the permission, see [Permission Application Guide](../security/accesstoken-guidelines.md). For details about how to use and manage microphones, see [Microphone Management](mic-management.md).
diff --git a/en/application-dev/media/audio-recording-stream-management.md b/en/application-dev/media/audio-recording-stream-management.md
new file mode 100644
index 0000000000000000000000000000000000000000..8161d1bd5bbe5fbc55560ab557570baaaa99976a
--- /dev/null
+++ b/en/application-dev/media/audio-recording-stream-management.md
@@ -0,0 +1,118 @@
+# Audio Recording Stream Management
+
+An audio recording application must be aware of audio stream state changes and perform corresponding operations. For example, when detecting that the user stops recording, the application must notify the user that the recording has finished.
+
+## Reading or Listening for Audio Stream State Changes in the Application
+
+Create an AudioCapturer by referring to [Using AudioCapturer for Audio Recording](using-audiocapturer-for-recording.md) or [audio.createAudioCapturer](../reference/apis/js-apis-audio.md#audiocreateaudiocapturer8). Then obtain the audio stream state changes in either of the following ways:
+
+- Check the [state](../reference/apis/js-apis-audio.md#attributes) of the AudioCapturer.
+
+  ```ts
+  let audioCapturerState = audioCapturer.state;
+  console.info(`Current state is: ${audioCapturerState}`)
+  ```
+
+- Register **stateChange** to listen for state changes of the AudioCapturer.
+
+  ```ts
+  audioCapturer.on('stateChange', (capturerState) => {
+    console.info(`State change to: ${capturerState}`)
+  });
+  ```
+
+The application then performs an operation, for example, displaying a message indicating the end of the recording, by comparing the obtained state with [AudioState](../reference/apis/js-apis-audio.md#audiostate8).
+
+## Reading or Listening for Changes in All Audio Streams
+
+If an application needs to obtain the change information about all audio streams, it can use **AudioStreamManager** to read or listen for the changes of all audio streams.
+
+> **NOTE**
+>
+> Audio stream change information marked as a system API can be viewed only by system applications.
+
+The figure below shows the call relationship of audio stream management.
+
+![Call relationship of recording stream management](figures/invoking-relationship-recording-stream-mgmt.png)
+
+During application development, first use **getStreamManager()** to create an **AudioStreamManager** instance. Then call **on('audioCapturerChange')** to listen for audio stream changes and obtain a notification when the audio stream state or device changes. To cancel listening for these changes, call **off('audioCapturerChange')**. You can call **getCurrentAudioCapturerInfoArray()** to obtain information such as the unique ID of the recording stream, UID of the recording stream client, and stream status.
+
+For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9).
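+
+Before moving on to the stream manager workflow below, the sketch that follows shows one way to act on the per-application state changes described above. It is a minimal example under stated assumptions: **audioCapturer** is the AudioCapturer instance created earlier, and **notifyRecordingFinished()** is a hypothetical placeholder for whatever notification logic your application uses.
+
+```ts
+import audio from '@ohos.multimedia.audio';
+
+// Hypothetical application hook; replace it with your own notification logic.
+function notifyRecordingFinished() {
+  console.info('Recording finished');
+}
+
+// When the capturer is stopped or released, tell the user that the recording has finished.
+audioCapturer.on('stateChange', (capturerState) => {
+  if (capturerState === audio.AudioState.STATE_STOPPED ||
+      capturerState === audio.AudioState.STATE_RELEASED) {
+    notifyRecordingFinished();
+  }
+});
+```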
+
+
+## How to Develop
+
+1. Create an **AudioStreamManager** instance.
+
+   Before using **AudioStreamManager** APIs, you must use **getStreamManager()** to create an **AudioStreamManager** instance.
+
+   ```ts
+   import audio from '@ohos.multimedia.audio';
+   let audioManager = audio.getAudioManager();
+   let audioStreamManager = audioManager.getStreamManager();
+   ```
+
+2. Use **on('audioCapturerChange')** to listen for audio recording stream changes. If the application needs to receive a notification when the audio recording stream state or device changes, it can subscribe to this event.
+
+   ```ts
+   audioStreamManager.on('audioCapturerChange', (AudioCapturerChangeInfoArray) => {
+     for (let i = 0; i < AudioCapturerChangeInfoArray.length; i++) {
+       console.info(`## CapChange on is called for element ${i} ##`);
+       console.info(`StreamId for ${i} is: ${AudioCapturerChangeInfoArray[i].streamId}`);
+       console.info(`Source for ${i} is: ${AudioCapturerChangeInfoArray[i].capturerInfo.source}`);
+       console.info(`Flag ${i} is: ${AudioCapturerChangeInfoArray[i].capturerInfo.capturerFlags}`);
+       for (let j = 0; j < AudioCapturerChangeInfoArray[i].deviceDescriptors.length; j++) {
+         console.info(`Id: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].id}`);
+         console.info(`Type: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceType}`);
+         console.info(`Role: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceRole}`);
+         console.info(`Name: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].name}`);
+         console.info(`Address: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].address}`);
+         console.info(`SampleRates: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].sampleRates[0]}`);
+         console.info(`ChannelCounts: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelCounts[0]}`);
+         console.info(`ChannelMask: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelMasks}`);
+       }
+     }
+   });
+   ```
+
+3. (Optional) Use **off('audioCapturerChange')** to cancel listening for audio recording stream changes.
+
+   ```ts
+   audioStreamManager.off('audioCapturerChange');
+   console.info('CapturerChange Off is called');
+   ```
+
+4. (Optional) Call **getCurrentAudioCapturerInfoArray()** to obtain information about the current audio recording streams.
+
+   This API can be used to obtain the unique ID of the audio recording stream, UID of the audio recording client, audio state, and other information about the AudioCapturer.
+   > **NOTE**
+   >
+   > Before listening for state changes of all audio streams, the application must request the **ohos.permission.USE_BLUETOOTH** [permission](../security/accesstoken-guidelines.md) so that the device name and device address (Bluetooth-related attributes) can be displayed correctly.
+
+  ```ts
+  async function getCurrentAudioCapturerInfoArray() {
+    await audioStreamManager.getCurrentAudioCapturerInfoArray().then(function (AudioCapturerChangeInfoArray) {
+      console.info('getCurrentAudioCapturerInfoArray Get Promise Called');
+      if (AudioCapturerChangeInfoArray != null) {
+        for (let i = 0; i < AudioCapturerChangeInfoArray.length; i++) {
+          console.info(`StreamId for ${i} is: ${AudioCapturerChangeInfoArray[i].streamId}`);
+          console.info(`Source for ${i} is: ${AudioCapturerChangeInfoArray[i].capturerInfo.source}`);
+          console.info(`Flag ${i} is: ${AudioCapturerChangeInfoArray[i].capturerInfo.capturerFlags}`);
+          for (let j = 0; j < AudioCapturerChangeInfoArray[i].deviceDescriptors.length; j++) {
+            console.info(`Id: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].id}`);
+            console.info(`Type: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceType}`);
+            console.info(`Role: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceRole}`);
+            console.info(`Name: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].name}`);
+            console.info(`Address: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].address}`);
+            console.info(`SampleRates: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].sampleRates[0]}`);
+            console.info(`ChannelCounts: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelCounts[0]}`);
+            console.info(`ChannelMask: ${i} : ${AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelMasks}`);
+          }
+        }
+      }
+    }).catch((err) => {
+      console.error(`Invoke getCurrentAudioCapturerInfoArray failed, code is ${err.code}, message is ${err.message}`);
+    });
+  }
+  ```
diff --git a/en/application-dev/media/audio-renderer.md b/en/application-dev/media/audio-renderer.md
deleted file mode 100644
index 0a58ea5251744162d9948c23e75351b298a95bb8..0000000000000000000000000000000000000000
--- a/en/application-dev/media/audio-renderer.md
+++ /dev/null
@@ -1,522 +0,0 @@
-# Audio Rendering Development
-
-## Introduction
-
-**AudioRenderer** provides APIs for rendering audio files and controlling playback. It also supports audio interruption. You can use the APIs provided by **AudioRenderer** to play audio files in output devices and manage playback tasks.
-Before calling the APIs, be familiar with the following terms:
-
-- **Audio interruption**: When an audio stream with a higher priority needs to be played, the audio renderer interrupts the stream with a lower priority. For example, if a call comes in when the user is listening to music, the music playback, which is the lower priority stream, is paused.
-- **Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioRenderer** instance. This is because some operations can be performed only when the audio renderer is in a given state. If the application performs an operation when the audio renderer is not in the given state, the system may throw an exception or generate other undefined behavior.
-- **Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions. For more information, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
-- **Audio interruption mode**: OpenHarmony provides two audio interruption modes: **shared mode** and **independent mode**.
In shared mode, all **AudioRenderer** instances created by the same application share one focus object, and there is no focus transfer inside the application. Therefore, no callback will be triggered. In independent mode, each **AudioRenderer** instance has an independent focus object, and focus transfer is triggered by focus preemption. When focus transfer occurs, the **AudioRenderer** instance that is having the focus receives a notification through the callback. By default, the shared mode is used. You can call **setInterruptMode()** to switch to the independent mode. - -## Working Principles - -The following figure shows the audio renderer state transitions. - -**Figure 1** Audio renderer state transitions - -![audio-renderer-state](figures/audio-renderer-state.png) - -- **PREPARED**: The audio renderer enters this state by calling **create()**. -- **RUNNING**: The audio renderer enters this state by calling **start()** when it is in the **PREPARED** state or by calling **start()** when it is in the **STOPPED** state. -- **PAUSED**: The audio renderer enters this state by calling **pause()** when it is in the **RUNNING** state. When the audio playback is paused, it can call **start()** to resume the playback. -- **STOPPED**: The audio renderer enters this state by calling **stop()** when it is in the **PAUSED** or **RUNNING** state. -- **RELEASED**: The audio renderer enters this state by calling **release()** when it is in the **PREPARED**, **PAUSED**, or **STOPPED** state. In this state, the audio renderer releases all occupied hardware and software resources and will not transit to any other state. - -## How to Develop - -For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8). - -1. Use **createAudioRenderer()** to create a global **AudioRenderer** instance. - Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. This instance is used to render audio, control and obtain the rendering status, and register a callback for notification. - - ```js - import audio from '@ohos.multimedia.audio'; - import fs from '@ohos.file.fs'; - - // Perform a self-test on APIs related to audio rendering. - @Entry - @Component - struct AudioRenderer1129 { - private audioRenderer: audio.AudioRenderer; - private bufferSize; // It will be used for the call of the write function in step 3. - private audioRenderer1: audio.AudioRenderer; // It will be used for the call in the complete example in step 14. - private audioRenderer2: audio.AudioRenderer; // It will be used for the call in the complete example in step 14. - - async initAudioRender(){ - let audioStreamInfo = { - samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, - channels: audio.AudioChannel.CHANNEL_1, - sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, - encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW - } - let audioRendererInfo = { - content: audio.ContentType.CONTENT_TYPE_SPEECH, - usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION, - rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0. - } - let audioRendererOptions = { - streamInfo: audioStreamInfo, - rendererInfo: audioRendererInfo - } - this.audioRenderer = await audio.createAudioRenderer(audioRendererOptions); - console.log("Create audio renderer success."); - } - } - ``` - -2. Use **start()** to start audio rendering. 
- - ```js - async startRenderer() { - let state = this.audioRenderer.state; - // The audio renderer should be in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state when start() is called. - if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED && - state != audio.AudioState.STATE_STOPPED) { - console.info('Renderer is not in a correct state to start'); - return; - } - - await this.audioRenderer.start(); - - state = this.audioRenderer.state; - if (state == audio.AudioState.STATE_RUNNING) { - console.info('Renderer started'); - } else { - console.error('Renderer start failed'); - } - } - ``` - - The renderer state will be **STATE_RUNNING** once the audio renderer is started. The application can then begin reading buffers. - -3. Call **write()** to write data to the buffer. - - Read the audio data to be played to the buffer. Call **write()** repeatedly to write the data to the buffer. Import fs from '@ohos.file.fs'; as step 1. - - ```js - async writeData(){ - // Set a proper buffer size for the audio renderer. You can also select a buffer of another size. - this.bufferSize = await this.audioRenderer.getBufferSize(); - let dir = globalThis.fileDir; // You must use the sandbox path. - const filePath = dir + '/file_example_WAV_2MG.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/file_example_WAV_2MG.wav - console.info(`file filePath: ${ filePath}`); - - let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY); - let stat = await fs.stat(filePath); // Music file information. - let buf = new ArrayBuffer(this.bufferSize); - let len = stat.size % this.bufferSize == 0 ? Math.floor(stat.size / this.bufferSize) : Math.floor(stat.size / this.bufferSize + 1); - for (let i = 0;i < len; i++) { - let options = { - offset: i * this.bufferSize, - length: this.bufferSize - } - let readsize = await fs.read(file.fd, buf, options) - let writeSize = await new Promise((resolve,reject)=>{ - this.audioRenderer.write(buf,(err,writeSize)=>{ - if(err){ - reject(err) - }else{ - resolve(writeSize) - } - }) - }) - } - - fs.close(file) - await this.audioRenderer.stop(); // Stop rendering. - await this.audioRenderer.release(); // Release the resources. - } - ``` - -4. (Optional) Call **pause()** or **stop()** to pause or stop rendering. - - ```js - async pauseRenderer() { - let state = this.audioRenderer.state; - // The audio renderer can be paused only when it is in the STATE_RUNNING state. - if (state != audio.AudioState.STATE_RUNNING) { - console.info('Renderer is not running'); - return; - } - - await this.audioRenderer.pause(); - - state = this.audioRenderer.state; - if (state == audio.AudioState.STATE_PAUSED) { - console.info('Renderer paused'); - } else { - console.error('Renderer pause failed'); - } - } - - async stopRenderer() { - let state = this.audioRenderer.state; - // The audio renderer can be stopped only when it is in STATE_RUNNING or STATE_PAUSED state. - if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) { - console.info('Renderer is not running or paused'); - return; - } - - await this.audioRenderer.stop(); - - state = this.audioRenderer.state; - if (state == audio.AudioState.STATE_STOPPED) { - console.info('Renderer stopped'); - } else { - console.error('Renderer stop failed'); - } - } - ``` - -5. (Optional) Call **drain()** to clear the buffer. 
- - ```js - async drainRenderer() { - let state = this.audioRenderer.state; - // drain() can be used only when the audio renderer is in the STATE_RUNNING state. - if (state != audio.AudioState.STATE_RUNNING) { - console.info('Renderer is not running'); - return; - } - - await this.audioRenderer.drain(); - state = this.audioRenderer.state; - } - ``` - -6. After the task is complete, call **release()** to release related resources. - - **AudioRenderer** uses a large number of system resources. Therefore, ensure that the resources are released after the task is complete. - - ```js - async releaseRenderer() { - let state = this.audioRenderer.state; - // The audio renderer can be released only when it is not in the STATE_RELEASED or STATE_NEW state. - if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) { - console.info('Renderer already released'); - return; - } - await this.audioRenderer.release(); - - state = this.audioRenderer.state; - if (state == audio.AudioState.STATE_RELEASED) { - console.info('Renderer released'); - } else { - console.info('Renderer release failed'); - } - } - ``` - -7. (Optional) Obtain the audio renderer information. - - You can use the following code to obtain the audio renderer information: - - ```js - async getRenderInfo(){ - // Obtain the audio renderer state. - let state = this.audioRenderer.state; - // Obtain the audio renderer information. - let audioRendererInfo : audio.AudioRendererInfo = await this.audioRenderer.getRendererInfo(); - // Obtain the audio stream information. - let audioStreamInfo : audio.AudioStreamInfo = await this.audioRenderer.getStreamInfo(); - // Obtain the audio stream ID. - let audioStreamId : number = await this.audioRenderer.getAudioStreamId(); - // Obtain the Unix timestamp, in nanoseconds. - let audioTime : number = await this.audioRenderer.getAudioTime(); - // Obtain a proper minimum buffer size. - let bufferSize : number = await this.audioRenderer.getBufferSize(); - // Obtain the audio renderer rate. - let renderRate : audio.AudioRendererRate = await this.audioRenderer.getRenderRate(); - } - ``` - -8. (Optional) Set the audio renderer information. - - You can use the following code to set the audio renderer information: - - ```js - async setAudioRenderInfo(){ - // Set the audio renderer rate to RENDER_RATE_NORMAL. - let renderRate : audio.AudioRendererRate = audio.AudioRendererRate.RENDER_RATE_NORMAL; - await this.audioRenderer.setRenderRate(renderRate); - // Set the interruption mode of the audio renderer to SHARE_MODE. - let interruptMode : audio.InterruptMode = audio.InterruptMode.SHARE_MODE; - await this.audioRenderer.setInterruptMode(interruptMode); - // Set the volume of the stream to 0.5. - let volume : number = 0.5; - await this.audioRenderer.setVolume(volume); - } - ``` - -9. (Optional) Use **on('audioInterrupt')** to subscribe to the audio interruption event, and use **off('audioInterrupt')** to unsubscribe from the event. - - Audio interruption means that Stream A will be interrupted when Stream B with a higher or equal priority requests to become active and use the output device. - - In some cases, the audio renderer performs forcible operations such as pausing and ducking, and notifies the application through **InterruptEvent**. In other cases, the application can choose to act on the **InterruptEvent** or ignore it. - - In the case of audio interruption, the application may encounter write failures. 
To avoid such failures, interruption-unaware applications can use **audioRenderer.state** to check the audio renderer state before writing audio data. The applications can obtain more details by subscribing to the audio interruption events. For details, see [InterruptEvent](../reference/apis/js-apis-audio.md#interruptevent9). - - It should be noted that the audio interruption event subscription of the **AudioRenderer** module is slightly different from **on('interrupt')** in [AudioManager](../reference/apis/js-apis-audio.md#audiomanager). The **on('interrupt')** and **off('interrupt')** APIs are deprecated since API version 9. In the **AudioRenderer** module, you only need to call **on('audioInterrupt')** to listen for focus change events. When the **AudioRenderer** instance created by the application performs actions such as start, stop, and pause, it requests the focus, which triggers focus transfer and in return enables the related **AudioRenderer** instance to receive a notification through the callback. For instances other than **AudioRenderer**, such as frequency modulation (FM) and voice wakeup, the application does not create an instance. In this case, the application can call **on('interrupt')** in **AudioManager** to receive a focus change notification. - - ```js - async subscribeAudioRender(){ - this.audioRenderer.on('audioInterrupt', (interruptEvent) => { - console.info('InterruptEvent Received'); - console.info(`InterruptType: ${interruptEvent.eventType}`); - console.info(`InterruptForceType: ${interruptEvent.forceType}`); - console.info(`AInterruptHint: ${interruptEvent.hintType}`); - - if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_FORCE) { - switch (interruptEvent.hintType) { - // Forcible pausing initiated by the audio framework. To prevent data loss, stop the write operation. - case audio.InterruptHint.INTERRUPT_HINT_PAUSE: - console.info('isPlay is false'); - break; - // Forcible stopping initiated by the audio framework. To prevent data loss, stop the write operation. - case audio.InterruptHint.INTERRUPT_HINT_STOP: - console.info('isPlay is false'); - break; - // Forcible ducking initiated by the audio framework. - case audio.InterruptHint.INTERRUPT_HINT_DUCK: - break; - // Undocking initiated by the audio framework. - case audio.InterruptHint.INTERRUPT_HINT_UNDUCK: - break; - } - } else if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_SHARE) { - switch (interruptEvent.hintType) { - // Notify the application that the rendering starts. - case audio.InterruptHint.INTERRUPT_HINT_RESUME: - this.startRenderer(); - break; - // Notify the application that the audio stream is interrupted. The application then determines whether to continue. (In this example, the application pauses the rendering.) - case audio.InterruptHint.INTERRUPT_HINT_PAUSE: - console.info('isPlay is false'); - this.pauseRenderer(); - break; - } - } - }); - } - ``` - -10. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event. - - After the mark reached event is subscribed to, when the number of frames rendered by the audio renderer reaches the specified value, a callback is triggered and the specified value is returned. - - ```js - async markReach(){ - this.audioRenderer.on('markReach', 50, (position) => { - if (position == 50) { - console.info('ON Triggered successfully'); - } - }); - this.audioRenderer.off('markReach'); // Unsubscribe from the mark reached event. 
This event will no longer be listened for. - } - ``` - -11. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event. - - After the period reached event is subscribed to, each time the number of frames rendered by the audio renderer reaches the specified value, a callback is triggered and the specified value is returned. - - ```js - async periodReach(){ - this.audioRenderer.on('periodReach',10, (reachNumber) => { - console.info(`In this period, the renderer reached frame: ${reachNumber} `); - }); - - this.audioRenderer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for. - } - ``` - -12. (Optional) Use **on('stateChange')** to subscribe to audio renderer state changes. - - After the **stateChange** event is subscribed to, when the audio renderer state changes, a callback is triggered and the audio renderer state is returned. - - ```js - async stateChange(){ - this.audioRenderer.on('stateChange', (audioState) => { - console.info('State change event Received'); - console.info(`Current renderer state is: ${audioState}`); - }); - } - ``` - -13. (Optional) Handle exceptions of **on()**. - - If the string or the parameter type passed in **on()** is incorrect , the application throws an exception. In this case, you can use **try catch** to capture the exception. - - ```js - async errorCall(){ - try { - this.audioRenderer.on('invalidInput', () => { // The string is invalid. - }) - } catch (err) { - console.info(`Call on function error, ${err}`); // The application throws exception 401. - } - try { - this.audioRenderer.on(1, () => { // The type of the input parameter is incorrect. - }) - } catch (err) { - console.info(`Call on function error, ${err}`); // The application throws exception 6800101. - } - } - ``` - -14. (Optional) Refer to the complete example of **on('audioInterrupt')**. - Declare audioRenderer1 and audioRenderer2 first. For details, see step 1. - Create **AudioRender1** and **AudioRender2** in an application, configure the independent interruption mode, and call **on('audioInterrupt')** to subscribe to audio interruption events. At the beginning, **AudioRender1** has the focus. When **AudioRender2** attempts to obtain the focus, **AudioRender1** receives a focus transfer notification and the related log information is printed. If the shared mode is used, the log information will not be printed during application running. - ```js - async runningAudioRender1(){ - let audioStreamInfo = { - samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000, - channels: audio.AudioChannel.CHANNEL_1, - sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S32LE, - encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW - } - let audioRendererInfo = { - content: audio.ContentType.CONTENT_TYPE_MUSIC, - usage: audio.StreamUsage.STREAM_USAGE_MEDIA, - rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0. - } - let audioRendererOptions = { - streamInfo: audioStreamInfo, - rendererInfo: audioRendererInfo - } - - // 1.1 Create an instance. - this.audioRenderer1 = await audio.createAudioRenderer(audioRendererOptions); - console.info("Create audio renderer 1 success."); - - // 1.2 Set the independent mode. 
-     this.audioRenderer1.setInterruptMode(1).then(data => {
-       console.info('audioRenderer1 setInterruptMode Success!');
-     }).catch((err) => {
-       console.error(`audioRenderer1 setInterruptMode Fail: ${err}`);
-     });
-
-     // 1.3 Set the listener.
-     this.audioRenderer1.on('audioInterrupt', async(interruptEvent) => {
-       console.info(`audioRenderer1 on audioInterrupt : ${JSON.stringify(interruptEvent)}`)
-     });
-
-     // 1.4 Start rendering.
-     await this.audioRenderer1.start();
-     console.info('startAudioRender1 success');
-
-     // 1.5 Obtain the buffer size, which is the proper minimum buffer size of the audio renderer. You can also select a buffer of another size.
-     const bufferSize = await this.audioRenderer1.getBufferSize();
-     console.info(`audio bufferSize: ${bufferSize}`);
-
-     // 1.6 Obtain the original audio data file.
-     let dir = globalThis.fileDir; // You must use the sandbox path.
-     const path1 = dir + '/music001_48000_32_1.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/music001_48000_32_1.wav
-     console.info(`audioRender1 file path: ${path1}`);
-     let file1 = fs.openSync(path1, fs.OpenMode.READ_ONLY);
-     let stat = await fs.stat(path1); // Music file information.
-     let buf = new ArrayBuffer(bufferSize);
-     let len = stat.size % bufferSize == 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
-
-     // 1.7 Render the original audio data in the buffer by using audioRender.
-     for (let i = 0; i < len; i++) {
-       let options = {
-         offset: i * bufferSize,
-         length: bufferSize
-       }
-       let readsize = await fs.read(file1.fd, buf, options)
-       let writeSize = await new Promise((resolve, reject) => {
-         this.audioRenderer1.write(buf, (err, writeSize) => {
-           if (err) {
-             reject(err)
-           } else {
-             resolve(writeSize)
-           }
-         })
-       })
-     }
-     await fs.close(file1)
-     await this.audioRenderer1.stop(); // Stop rendering.
-     await this.audioRenderer1.release(); // Release the resources.
-   }
-
-   async runningAudioRender2(){
-     let audioStreamInfo = {
-       samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000,
-       channels: audio.AudioChannel.CHANNEL_1,
-       sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S32LE,
-       encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
-     }
-     let audioRendererInfo = {
-       content: audio.ContentType.CONTENT_TYPE_MUSIC,
-       usage: audio.StreamUsage.STREAM_USAGE_MEDIA,
-       rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0.
-     }
-     let audioRendererOptions = {
-       streamInfo: audioStreamInfo,
-       rendererInfo: audioRendererInfo
-     }
-
-     // 2.1 Create another instance.
-     this.audioRenderer2 = await audio.createAudioRenderer(audioRendererOptions);
-     console.info("Create audio renderer 2 success.");
-
-     // 2.2 Set the independent mode.
-     this.audioRenderer2.setInterruptMode(1).then(data => {
-       console.info('audioRenderer2 setInterruptMode Success!');
-     }).catch((err) => {
-       console.error(`audioRenderer2 setInterruptMode Fail: ${err}`);
-     });
-
-     // 2.3 Set the listener.
-     this.audioRenderer2.on('audioInterrupt', async(interruptEvent) => {
-       console.info(`audioRenderer2 on audioInterrupt : ${JSON.stringify(interruptEvent)}`)
-     });
-
-     // 2.4 Start rendering.
-     await this.audioRenderer2.start();
-     console.info('startAudioRender2 success');
-
-     // 2.5 Obtain the buffer size.
-     const bufferSize = await this.audioRenderer2.getBufferSize();
-     console.info(`audio bufferSize: ${bufferSize}`);
-
-     // 2.6 Read the original audio data file.
-     let dir = globalThis.fileDir; // You must use the sandbox path.
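-     // globalThis.fileDir is assumed to be initialized elsewhere in the application (for example, set to the sandbox path from the ability context's filesDir) before this function runs.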
-     const path2 = dir + '/music002_48000_32_1.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/music002_48000_32_1.wav
-     console.info(`audioRender2 file path: ${path2}`);
-     let file2 = fs.openSync(path2, fs.OpenMode.READ_ONLY);
-     let stat = await fs.stat(path2); // Music file information.
-     let buf = new ArrayBuffer(bufferSize);
-     let len = stat.size % bufferSize == 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
-
-     // 2.7 Render the original audio data in the buffer by using audioRender.
-     for (let i = 0; i < len; i++) {
-       let options = {
-         offset: i * bufferSize,
-         length: bufferSize
-       }
-       let readsize = await fs.read(file2.fd, buf, options)
-       let writeSize = await new Promise((resolve, reject) => {
-         this.audioRenderer2.write(buf, (err, writeSize) => {
-           if (err) {
-             reject(err)
-           } else {
-             resolve(writeSize)
-           }
-         })
-       })
-     }
-     await fs.close(file2)
-     await this.audioRenderer2.stop(); // Stop rendering.
-     await this.audioRenderer2.release(); // Release the resources.
-   }
-
-   // Integrated invoking entry.
-   async test(){
-     await this.runningAudioRender1();
-     await this.runningAudioRender2();
-   }
-
-   ```
\ No newline at end of file
diff --git a/en/application-dev/media/audio-routing-manager.md b/en/application-dev/media/audio-routing-manager.md
deleted file mode 100644
index 55febdca0fad968d946601fce4faed99bc148dd2..0000000000000000000000000000000000000000
--- a/en/application-dev/media/audio-routing-manager.md
+++ /dev/null
@@ -1,111 +0,0 @@
-# Audio Routing and Device Management Development
-
-## Overview
-
-The **AudioRoutingManager** module provides APIs for audio routing and device management. You can use the APIs to obtain the current input and output audio devices, listen for connection status changes of audio devices, and activate communication devices.
-
-## Working Principles
-
-The figure below shows the common APIs provided by the **AudioRoutingManager** module.
-
-**Figure 1** Common APIs of AudioRoutingManager
-
-![en-us_image_audio_routing_manager](figures/en-us_image_audio_routing_manager.png)
-
-You can use these APIs to obtain the device list, subscribe to or unsubscribe from device connection status changes, activate communication devices, and obtain their activation status. For details, see [Audio Management](../reference/apis/js-apis-audio.md).
-
-## How to Develop
-
-For details about the APIs, see [AudioRoutingManager in Audio Management](../reference/apis/js-apis-audio.md#audioroutingmanager9).
-
-1. Obtain an **AudioRoutingManager** instance.
-
-   Before using an API in **AudioRoutingManager**, you must use **getRoutingManager()** to obtain an **AudioRoutingManager** instance.
-
-   ```js
-   import audio from '@ohos.multimedia.audio';
-   async loadAudioRoutingManager() {
-     var audioRoutingManager = await audio.getAudioManager().getRoutingManager();
-     console.info('audioRoutingManager------create-------success.');
-   }
-   ```
-
-2. (Optional) Obtain the device list and subscribe to device connection status changes.
-
-   To obtain the device list (such as input, output, distributed input, and distributed output devices) or listen for connection status changes of audio devices, refer to the following code:
-
-   ```js
-   import audio from '@ohos.multimedia.audio';
-   // Obtain an AudioRoutingManager instance.
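-   // Note: audioRoutingManager is declared with var inside the function below; for getDevices() and the other helper functions to access it, it is assumed to be stored in an outer (for example, module-level) scope.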
-   async loadAudioRoutingManager() {
-     var audioRoutingManager = await audio.getAudioManager().getRoutingManager();
-     console.info('audioRoutingManager------create-------success.');
-   }
-   // Obtain information about all audio devices. (You can set DeviceFlag as required.)
-   async getDevices() {
-     await loadAudioRoutingManager();
-     await audioRoutingManager.getDevices(audio.DeviceFlag.ALL_DEVICES_FLAG).then((data) => {
-       console.info(`getDevices success and data is: ${JSON.stringify(data)}.`);
-     });
-   }
-   // Subscribe to connection status changes of audio devices.
-   async onDeviceChange() {
-     await loadAudioRoutingManager();
-     audioRoutingManager.on('deviceChange', audio.DeviceFlag.ALL_DEVICES_FLAG, (deviceChanged) => {
-       console.info('on device change type : ' + deviceChanged.type);
-       console.info('on device descriptor size : ' + deviceChanged.deviceDescriptors.length);
-       console.info('on device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceRole);
-       console.info('on device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceType);
-     });
-   }
-   // Unsubscribe from the connection status changes of audio devices.
-   async offDeviceChange() {
-     await loadAudioRoutingManager();
-     audioRoutingManager.off('deviceChange', (deviceChanged) => {
-       console.info('off device change type : ' + deviceChanged.type);
-       console.info('off device descriptor size : ' + deviceChanged.deviceDescriptors.length);
-       console.info('off device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceRole);
-       console.info('off device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceType);
-     });
-   }
-   // Complete process: Call APIs to obtain all devices and subscribe to device changes, then manually change the connection status of a device (for example, a wired headset), and finally call APIs to obtain all devices and unsubscribe from the device changes.
-   async test(){
-     await getDevices();
-     await onDeviceChange();
-     // Manually disconnect or connect devices.
-     await getDevices();
-     await offDeviceChange();
-   }
-   ```
-
-3. (Optional) Activate a communication device and obtain its activation status.
-
-   ```js
-   import audio from '@ohos.multimedia.audio';
-   // Obtain an AudioRoutingManager instance.
-   async loadAudioRoutingManager() {
-     var audioRoutingManager = await audio.getAudioManager().getRoutingManager();
-     console.info('audioRoutingManager------create-------success.');
-   }
-   // Activate a communication device.
-   async setCommunicationDevice() {
-     await loadAudioRoutingManager();
-     await audioRoutingManager.setCommunicationDevice(audio.CommunicationDeviceType.SPEAKER, true).then(() => {
-       console.info('setCommunicationDevice true is success.');
-     });
-   }
-   // Obtain the activation status of the communication device.
-   async isCommunicationDeviceActive() {
-     await loadAudioRoutingManager();
-     await audioRoutingManager.isCommunicationDeviceActive(audio.CommunicationDeviceType.SPEAKER).then((value) => {
-       console.info(`CommunicationDevice state is: ${value}.`);
-     });
-   }
-   // Complete process: Activate a device and obtain the activation status.
- async test(){ - await setCommunicationDevice(); - await isCommunicationDeviceActive(); - } - ``` diff --git a/en/application-dev/media/audio-stream-manager.md b/en/application-dev/media/audio-stream-manager.md deleted file mode 100644 index 44ec37cd11f3666131214e5e908a1ce761fea111..0000000000000000000000000000000000000000 --- a/en/application-dev/media/audio-stream-manager.md +++ /dev/null @@ -1,164 +0,0 @@ -# Audio Stream Management Development - -## Introduction - -You can use **AudioStreamManager** to manage audio streams. - -## Working Principles - -The following figure shows the calling relationship of **AudioStreamManager** APIs. - -**Figure 1** AudioStreamManager API calling relationship - -![en-us_image_audio_stream_manager](figures/en-us_image_audio_stream_manager.png) - -**NOTE**: During application development, use **getStreamManager()** to create an **AudioStreamManager** instance. Then, you can call **on('audioRendererChange')** or **on('audioCapturerChange')** to listen for status, client, and audio attribute changes of the audio playback or recording application. To cancel the listening for these changes, call **off('audioRendererChange')** or **off('audioCapturerChange')**. You can call **getCurrentAudioRendererInfoArray()** to obtain information about the audio playback application, such as the unique audio stream ID, UID of the audio playback client, and audio status. Similarly, you can call **getCurrentAudioCapturerInfoArray()** to obtain information about the audio recording application. - -## How to Develop - -For details about the APIs, see [AudioStreamManager](../reference/apis/js-apis-audio.md#audiostreammanager9). - -1. Create an **AudioStreamManager** instance. - - Before using **AudioStreamManager** APIs, you must use **getStreamManager()** to create an **AudioStreamManager** instance. - - ```js - var audioManager = audio.getAudioManager(); - var audioStreamManager = audioManager.getStreamManager(); - ``` - -2. (Optional) Call **on('audioRendererChange')** to listen for audio renderer changes. - - If an application needs to receive notifications when the audio playback application status, audio playback client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md). 
- - ```js - audioStreamManager.on('audioRendererChange', (AudioRendererChangeInfoArray) => { - for (let i = 0; i < AudioRendererChangeInfoArray.length; i++) { - AudioRendererChangeInfo = AudioRendererChangeInfoArray[i]; - console.info('## RendererChange on is called for ' + i + ' ##'); - console.info('StreamId for ' + i + ' is:' + AudioRendererChangeInfo.streamId); - console.info('ClientUid for ' + i + ' is:' + AudioRendererChangeInfo.clientUid); - console.info('Content for ' + i + ' is:' + AudioRendererChangeInfo.rendererInfo.content); - console.info('Stream for ' + i + ' is:' + AudioRendererChangeInfo.rendererInfo.usage); - console.info('Flag ' + i + ' is:' + AudioRendererChangeInfo.rendererInfo.rendererFlags); - console.info('State for ' + i + ' is:' + AudioRendererChangeInfo.rendererState); - var devDescriptor = AudioRendererChangeInfo.deviceDescriptors; - for (let j = 0; j < AudioRendererChangeInfo.deviceDescriptors.length; j++) { - console.info('Id:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].id); - console.info('Type:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].deviceType); - console.info('Role:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].deviceRole); - console.info('Name:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].name); - console.info('Address:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].address); - console.info('SampleRates:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].sampleRates[0]); - console.info('ChannelCounts' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].channelCounts[0]); - console.info('ChannelMask:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].channelMasks); - } - } - }); - ``` - -3. (Optional) Call **off('audioRendererChange')** to cancel listening for audio renderer changes. - - ```js - audioStreamManager.off('audioRendererChange'); - console.info('######### RendererChange Off is called #########'); - ``` - -4. (Optional) Call **on('audioCapturerChange')** to listen for audio capturer changes. - - If an application needs to receive notifications when the audio recording application status, audio recording client, or audio attribute changes, it can subscribe to this event. For more events that can be subscribed to, see [Audio Management](../reference/apis/js-apis-audio.md). 
- - ```js - audioStreamManager.on('audioCapturerChange', (AudioCapturerChangeInfoArray) => { - for (let i = 0; i < AudioCapturerChangeInfoArray.length; i++) { - console.info(' ## audioCapturerChange on is called for element ' + i + ' ##'); - console.info('StreamId for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].streamId); - console.info('ClientUid for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].clientUid); - console.info('Source for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerInfo.source); - console.info('Flag ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerInfo.capturerFlags); - console.info('State for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerState); - for (let j = 0; j < AudioCapturerChangeInfoArray[i].deviceDescriptors.length; j++) { - console.info('Id:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].id); - console.info('Type:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceType); - console.info('Role:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceRole); - console.info('Name:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].name); - console.info('Address:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].address); - console.info('SampleRates:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].sampleRates[0]); - console.info('ChannelCounts' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelCounts[0]); - console.info('ChannelMask:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelMasks); - } - } - }); - ``` - -5. (Optional) Call **off('audioCapturerChange')** to cancel listening for audio capturer changes. - - ```js - audioStreamManager.off('audioCapturerChange'); - console.info('######### CapturerChange Off is called #########'); - ``` - -6. (Optional) Call **getCurrentAudioRendererInfoArray()** to obtain information about the current audio renderer. - - This API can be used to obtain the unique ID of the audio stream, UID of the audio playback client, audio status, and other information about the audio player. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured, for the device name and device address to be displayed correctly. 
- - ```js - await audioStreamManager.getCurrentAudioRendererInfoArray().then( function (AudioRendererChangeInfoArray) { - console.info('######### Get Promise is called ##########'); - if (AudioRendererChangeInfoArray != null) { - for (let i = 0; i < AudioRendererChangeInfoArray.length; i++) { - AudioRendererChangeInfo = AudioRendererChangeInfoArray[i]; - console.info('StreamId for ' + i +' is:' + AudioRendererChangeInfo.streamId); - console.info('ClientUid for ' + i + ' is:' + AudioRendererChangeInfo.clientUid); - console.info('Content ' + i + ' is:' + AudioRendererChangeInfo.rendererInfo.content); - console.info('Stream' + i +' is:' + AudioRendererChangeInfo.rendererInfo.usage); - console.info('Flag' + i + ' is:' + AudioRendererChangeInfo.rendererInfo.rendererFlags); - console.info('State for ' + i + ' is:' + AudioRendererChangeInfo.rendererState); - var devDescriptor = AudioRendererChangeInfo.deviceDescriptors; - for (let j = 0; j < AudioRendererChangeInfo.deviceDescriptors.length; j++) { - console.info('Id:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].id); - console.info('Type:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].deviceType); - console.info('Role:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].deviceRole); - console.info('Name:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].name); - console.info('Address:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].address); - console.info('SampleRates:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].sampleRates[0]); - console.info('ChannelCounts' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].channelCounts[0]); - console.info('ChannelMask:' + i + ':' + AudioRendererChangeInfo.deviceDescriptors[j].channelMasks); - } - } - } - }).catch((err) => { - console.log('getCurrentAudioRendererInfoArray :ERROR: ' + err.message); - }); - ``` - -7. (Optional) Call **getCurrentAudioCapturerInfoArray()** to obtain information about the current audio capturer. - This API can be used to obtain the unique ID of the audio stream, UID of the audio recording client, audio status, and other information about the audio capturer. Before calling this API, a third-party application must have the **ohos.permission.USE_BLUETOOTH** permission configured, for the device name and device address to be displayed correctly. 
- - ```js - await audioStreamManager.getCurrentAudioCapturerInfoArray().then( function (AudioCapturerChangeInfoArray) { - console.info('getCurrentAudioCapturerInfoArray: **** Get Promise Called ****'); - if (AudioCapturerChangeInfoArray != null) { - for (let i = 0; i < AudioCapturerChangeInfoArray.length; i++) { - console.info('StreamId for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].streamId); - console.info('ClientUid for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].clientUid); - console.info('Source for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerInfo.source); - console.info('Flag ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerInfo.capturerFlags); - console.info('State for ' + i + 'is:' + AudioCapturerChangeInfoArray[i].capturerState); - var devDescriptor = AudioCapturerChangeInfoArray[i].deviceDescriptors; - for (let j = 0; j < AudioCapturerChangeInfoArray[i].deviceDescriptors.length; j++) { - console.info('Id:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].id); - console.info('Type:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceType); - console.info('Role:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].deviceRole); - console.info('Name:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].name) - console.info('Address:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].address); - console.info('SampleRates:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].sampleRates[0]); - console.info('ChannelCounts' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelCounts[0]); - console.info('ChannelMask:' + i + ':' + AudioCapturerChangeInfoArray[i].deviceDescriptors[j].channelMasks); - } - } - } - }).catch((err) => { - console.log('getCurrentAudioCapturerInfoArray :ERROR: ' + err.message); - }); - ``` diff --git a/en/application-dev/media/audio-volume-manager.md b/en/application-dev/media/audio-volume-manager.md deleted file mode 100644 index 2063e831f886ae3e6e1fe0a5bd428da194d00227..0000000000000000000000000000000000000000 --- a/en/application-dev/media/audio-volume-manager.md +++ /dev/null @@ -1,126 +0,0 @@ -# Volume Management Development - -## Overview - -The **AudioVolumeManager** module provides APIs for volume management. You can use the APIs to obtain the volume of a stream, listen for ringer mode changes, and mute a microphone. - -## Working Principles - -The figure below shows the common APIs provided by the **AudioVolumeManager** module. - -**Figure 1** Common APIs of AudioVolumeManager - -![en-us_image_audio_volume_manager](figures/en-us_image_audio_volume_manager.png) - -**AudioVolumeManager** provides the APIs for subscribing to system volume changes and obtaining the audio volume group manager (an **AudioVolumeGroupManager** instance). Before calling any API in **AudioVolumeGroupManager**, you must call **getVolumeGroupManager** to obtain an **AudioVolumeGroupManager** instance. You can use the APIs provided by **AudioVolumeGroupManager** to obtain the volume of a stream, mute a microphone, and listen for microphone state changes. For details, see [Audio Management](../reference/apis/js-apis-audio.md). - -## Constraints - -Before developing a microphone management application, configure the permission **ohos.permission.MICROPHONE** for the application. To set the microphone state, configure the permission **ohos.permission.MANAGE_AUDIO_CONFIG** (a system permission). 
For details about the permission configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md).
-
-## How to Develop
-
-For details about the APIs, see [AudioVolumeManager in Audio Management](../reference/apis/js-apis-audio.md#audiovolumemanager9).
-
-1. Obtain an **AudioVolumeGroupManager** instance.
-
-   Before using an API in **AudioVolumeGroupManager**, you must use **getVolumeGroupManager()** to obtain an **AudioVolumeGroupManager** instance.
-
-   ```js
-   import audio from '@ohos.multimedia.audio';
-   async loadVolumeGroupManager() {
-     const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
-     var audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
-     console.info('audioVolumeGroupManager create success.');
-   }
-   ```
-
-2. (Optional) Obtain the volume information and ringer mode.
-
-   To obtain the volume information of an audio stream (such as ringtone, voice call, media, or voice assistant) or obtain the ringer mode (silent, vibration, or normal) of the current device, refer to the code below. For more details, see [Audio Management](../reference/apis/js-apis-audio.md).
-
-   ```js
-   import audio from '@ohos.multimedia.audio';
-   async loadVolumeGroupManager() {
-     const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
-     var audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
-     console.info('audioVolumeGroupManager create success.');
-   }
-
-   // Obtain the volume of a stream. The value ranges from 0 to 15.
-   async getVolume() {
-     await loadVolumeGroupManager();
-     await audioVolumeGroupManager.getVolume(audio.AudioVolumeType.MEDIA).then((value) => {
-       console.info(`getVolume success and volume is: ${value}.`);
-     });
-   }
-   // Obtain the minimum volume of a stream.
-   async getMinVolume() {
-     await loadVolumeGroupManager();
-     await audioVolumeGroupManager.getMinVolume(audio.AudioVolumeType.MEDIA).then((value) => {
-       console.info(`getMinVolume success and volume is: ${value}.`);
-     });
-   }
-   // Obtain the maximum volume of a stream.
-   async getMaxVolume() {
-     await loadVolumeGroupManager();
-     await audioVolumeGroupManager.getMaxVolume(audio.AudioVolumeType.MEDIA).then((value) => {
-       console.info(`getMaxVolume success and volume is: ${value}.`);
-     });
-   }
-   // Obtain the ringer mode in use: silent (0) | vibrate (1) | normal (2).
-   async getRingerMode() {
-     await loadVolumeGroupManager();
-     await audioVolumeGroupManager.getRingerMode().then((value) => {
-       console.info(`getRingerMode success and RingerMode is: ${value}.`);
-     });
-   }
-   ```
-
-3. (Optional) Obtain and set the microphone state, and subscribe to microphone state changes.
-
-   To obtain and set the microphone state or subscribe to microphone state changes, refer to the following code:
-
-   ```js
-   import audio from '@ohos.multimedia.audio';
-   async loadVolumeGroupManager() {
-     const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
-     var audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
-     console.info('audioVolumeGroupManager create success.');
-   }
-
-   async on() { // Subscribe to microphone state changes.
-     await loadVolumeGroupManager();
-     audioVolumeGroupManager.on('micStateChange', (micStateChange) => {
-       console.info(`Current microphone status is: ${micStateChange.mute}`);
-     });
-   }
-
-   async isMicrophoneMute() { // Check whether the microphone is muted.
-     await loadVolumeGroupManager();
-     await audioVolumeGroupManager.isMicrophoneMute().then((value) => {
-       console.info(`isMicrophoneMute is: ${value}.`);
-     });
-   }
-
-   async setMicrophoneMuteTrue() { // Mute the microphone.
-     await loadVolumeGroupManager();
-     await audioVolumeGroupManager.setMicrophoneMute(true).then(() => {
-       console.info('setMicrophoneMute to mute.');
-     });
-   }
-
-   async setMicrophoneMuteFalse() { // Unmute the microphone.
-     await loadVolumeGroupManager();
-     await audioVolumeGroupManager.setMicrophoneMute(false).then(() => {
-       console.info('setMicrophoneMute to not mute.');
-     });
-   }
-   async test(){ // Complete process: Subscribe to microphone state changes, obtain the microphone state, mute the microphone, obtain the microphone state, and unmute the microphone.
-     await on();
-     await isMicrophoneMute();
-     await setMicrophoneMuteTrue();
-     await isMicrophoneMute();
-     await setMicrophoneMuteFalse();
-   }
-   ```
diff --git a/en/application-dev/media/av-overview.md b/en/application-dev/media/av-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb0ea76dbfa90a3d3e3dd13e98ecf40876714310
--- /dev/null
+++ b/en/application-dev/media/av-overview.md
@@ -0,0 +1,66 @@
+# Audio and Video Overview
+
+You will learn how to use the audio and video APIs provided by the multimedia subsystem to develop a wide range of audio and video playback and recording scenarios. For example, you can use the **TonePlayer** class to implement simple prompt tones, so that a short sound is played when a new message is received, or use the **AVPlayer** class to develop a music player that can loop a piece of music.
+
+For every functionality provided by the multimedia subsystem, you will learn multiple implementation modes, each of which corresponds to a specific usage scenario. You will also learn the sub-functionalities in these scenarios. For example, in the **Audio Playback** chapter, you will learn about audio concurrency policies, volume management, and output device processing methods. All these will help you develop an application with more comprehensive features.
+
+This development guide applies only to audio and video playback and recording, which are implemented by the [@ohos.multimedia.audio](../reference/apis/js-apis-audio.md) and [@ohos.multimedia.media](../reference/apis/js-apis-media.md) modules. The UI, image processing, media storage, and other related capabilities are not covered.
+
+## Development Description
+
+Before developing an audio feature, especially before implementing audio data processing, you are advised to understand the following acoustic concepts. This will help you understand how the OpenHarmony APIs control the audio module and how to develop audio and video applications that are easier to use and deliver a better experience.
+
+- Audio quantization process: sampling > quantization > encoding
+
+- Concepts related to audio quantization: analog signal, digital signal, sampling rate, audio channel, sample format, bit width, bit rate, common encoding formats (such as AAC, MP3, PCM, and WMA), and common encapsulation formats (such as WAV, MPA, FLAC, AAC, and OGG)
+
+Before developing features related to audio and video playback, you are advised to understand the following concepts:
+
+- Playback process: network protocol > container format > audio and video codec > graphics/audio rendering
+- Network protocols: HLS, HTTP, HTTPS, and more
+- Container formats: MP4, MKV, MPEG-TS, WebM, and more
+- Encoding formats: H.263/H.264/H.265, MPEG4/MPEG2, and more
+
+## Introduction to Audio Streams
+
+An audio stream is an independent audio data processing unit that has a specific audio format and audio usage scenario information. An audio stream can be used in playback and recording scenarios, and it supports independent volume adjustment and audio device routing.
+
+The basic audio stream information is defined by [AudioStreamInfo](../reference/apis/js-apis-audio.md#audiostreaminfo8), which includes the sampling, audio channel, bit width, and encoding information. It describes the basic attributes of audio data and is mandatory for creating an audio playback or recording stream. For the audio module to process audio data correctly, the configured basic information must match the transmitted audio data.
+
+### Audio Stream Usage Scenario Information
+
+In addition to the basic information (which describes only the audio data), an audio stream has usage scenario information. This is because audio streams differ in volume, device routing, and concurrency policy. The system chooses an appropriate processing policy for an audio stream based on the usage scenario information, thereby delivering the optimal user experience.
+
+- Playback scenario
+
+  Information about the audio playback scenario is defined by using [StreamUsage](../reference/apis/js-apis-audio.md#streamusage) and [ContentType](../reference/apis/js-apis-audio.md#contenttype).
+
+  - **StreamUsage** specifies the usage type of an audio stream, for example, used for media, voice communication, voice assistant, notification, and ringtone.
+
+  - **ContentType** specifies the content type of data in an audio stream, for example, speech, music, movie, notification tone, and ringtone.
+
+- Recording scenario
+
+  Information about the audio stream recording scenario is defined by [SourceType](../reference/apis/js-apis-audio.md#sourcetype8).
+
+  **SourceType** specifies the recording source type of an audio stream, including the mic source, voice recognition source, and voice communication source.
+
+## Supported Audio Formats
+
+The audio module APIs (AudioRenderer, AudioCapturer, TonePlayer, and OpenSL ES) support audio data in PCM format.
+
+Be familiar with the following audio format specifications:
+
+- Common audio sampling rates are supported: 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000, 64000, and 96000 Hz. For details, see [AudioSamplingRate](../reference/apis/js-apis-audio.md#audiosamplingrate8).
+
+  The sampling rates that are actually available vary according to the device type.
+
+- Mono and stereo are supported. For details, see [AudioChannel](../reference/apis/js-apis-audio.md#audiochannel8).
+
+- The following sampling formats are supported: U8 (unsigned 8-bit integer), S16LE (signed 16-bit integer, little endian), S24LE (signed 24-bit integer, little endian), S32LE (signed 32-bit integer, little endian), and F32LE (signed 32-bit floating point number, little endian). For details, see [AudioSampleFormat](../reference/apis/js-apis-audio.md#audiosampleformat8).
+
+  Due to system restrictions, only some devices support the sampling formats S24LE, S32LE, and F32LE.
+
+  Little endian means that the least significant byte of the data is stored at the smallest memory address and the most significant byte at the largest. In this storage mode, the memory address order matches the bit weight of the data: the smallest address holds the byte with the lowest weight, and the largest address holds the byte with the highest weight.
+
+The audio and video formats supported by the APIs of the media module are described in [AVPlayer and AVRecorder](avplayer-avrecorder-overview.md).
diff --git a/en/application-dev/media/avplayer-avrecorder-overview.md b/en/application-dev/media/avplayer-avrecorder-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..051ca3b66ce1839046a2e783a8c274c304625045
--- /dev/null
+++ b/en/application-dev/media/avplayer-avrecorder-overview.md
@@ -0,0 +1,148 @@
+# AVPlayer and AVRecorder
+
+The media module provides the [AVPlayer](#avplayer) and [AVRecorder](#avrecorder) classes to implement audio and video playback and recording.
+
+## AVPlayer
+
+The AVPlayer converts audio and video media assets (such as MP4, MP3, MKV, and MPEG-TS) into renderable images and audible analog audio signals, and plays the audio and video through output devices.
+
+The AVPlayer provides the integrated playback capability. This means that your application only needs to provide streaming media sources to implement media playback. It does not need to parse or decode data.
+
+### Audio Playback
+
+The figure below shows the interaction when the **AVPlayer** class is used to develop a music application.
+
+**Figure 1** Interaction with external modules for audio playback
+
+![Audio playback interaction diagram](figures/audio-playback-interaction-diagram.png)
+
+When a music application calls the **AVPlayer** APIs at the JS interface layer to implement audio playback, the player framework at the framework layer parses the media asset into audio data streams (in PCM format). The audio data streams are then decoded by software and output to the audio framework. The audio framework outputs the audio data streams to the audio HDI for rendering. A complete audio playback process requires the cooperation of the application, player framework, audio framework, and audio HDI.
+
+In Figure 1, the numbers indicate the process where data is transferred to external modules.
+
+1. The music application transfers the media asset to the **AVPlayer** instance.
+
+2. The player framework outputs the audio PCM data streams to the audio framework, which then outputs the data streams to the audio HDI.
+
+### Video Playback
+
+The figure below shows the interaction when the **AVPlayer** class is used to develop a video application.
+ +**Figure 2** Interaction with external modules for video playback + +![Video playback interaction diagram](figures/video-playback-interaction-diagram.png) + +When the video application calls the **AVPlayer** APIs at the JS interface layer to implement audio and video playback, the player framework at the framework layer parses the media asset into separate audio data streams and video data streams. The audio data streams are then decoded by software and output to the audio framework. The audio framework outputs the audio data streams to the audio HDI at the hardware interface layer to implement audio playback. The video data streams are then decoded by hardware (recommended) or software and output to the graphic framework. The graphic framework outputs the video data streams to the display HDI at the hardware interface layer to implement graphics rendering. + +A complete video playback process requires the cooperation of the application, XComponent, player framework, graphic framework, audio framework, display HDI, and audio HDI. + +In Figure 2, the numbers indicate the process where data is transferred to external modules. + +1. The application obtains a window surface ID from the XComponent. For details about how to obtain the window surface ID, see [XComponent](../reference/arkui-ts/ts-basic-components-xcomponent.md). + +2. The application transfers the media asset and surface ID to the **AVPlayer** instance. + +3. The player framework outputs the video elementary streams (ESs) to the decoding HDI to obtain video frames (NV12/NV21/RGBA). + +4. The player framework outputs the audio PCM data streams to the audio framework, which then outputs the data streams to the audio HDI. + +5. The player framework outputs the video frames (NV12/NV21/RGBA) to the graphic framework, which then outputs the video frames to the display HDI. + +### Supported Formats and Protocols + +Audio and video containers and codecs are domains specific to content creators. You are advised to use the mainstream playback formats, rather than custom ones to avoid playback failures, frame freezing, and artifacts. The system will not be affected by incompatibility issues. If such an issue occurs, you can exit playback. + +The table below lists the supported protocols. + +| Scenario| Description| +| -------- | -------- | +| Local VOD| The file descriptor is supported, but the file path is not.| +| Network VoD| HTTP, HTTPS, and HLS are supported.| + +The table below lists the supported audio playback formats. + +| Audio Container Format| Description| +| -------- | -------- | +| M4A| Audio format: AAC| +| AAC| Audio format: AAC| +| MP3| Audio format: MP3| +| OGG| Audio format: VORBIS | +| WAV| Audio format: PCM | + +> **NOTE** +> +> The supported video formats are further classified into mandatory and optional ones. All vendors must support mandatory ones and can determine whether to implement optional ones based on their service requirements. You are advised to perform compatibility processing to ensure that all the application functions are compatible on different platforms. + +| Video Format| Mandatory or Not| +| -------- | -------- | +| H.264 | Yes| +| MPEG-2 | No| +| MPEG-4 | No| +| H.263 | No| +| VP8 | No| + +The table below lists the supported playback formats and mainstream resolutions. + +| Video Container Format| Description| Resolution| +| -------- | -------- | -------- | +| MP4| Video formats: H.264, MPEG-2, MPEG-4, and H.263
Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p| +| MKV| Video formats: H.264, MPEG-2, MPEG-4, and H.263
Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p| +| TS| Video formats: H.264, MPEG-2, and MPEG-4
Audio formats: AAC and MP3| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p| +| WebM| Video format: VP8
Audio format: VORBIS| Mainstream resolutions, such as 4K, 1080p, 720p, 480p, and 270p| + +## AVRecorder + +The AVRecorder captures audio signals, receives video signals, encodes the audio and video signals, and saves them to files. With the AVRecorder, you can easily implement audio and video recording, including starting, pausing, resuming, and stopping recording, and releasing resources. You can also specify parameters such as the encoding format, encapsulation format, and file path for recording. + +**Figure 3** Interaction with external modules for video recording + +![Video recording interaction diagram](figures/video-recording-interaction-diagram.png) + +- Audio recording: When an application calls the **AVRecorder** APIs at the JS interface layer to implement audio recording, the player framework at the framework layer invokes the audio framework to capture audio data through the audio HDI. The audio data is then encoded by software and saved into a file. + +- Video recording: When an application calls the **AVRecorder** APIs at the JS interface layer to implement video recording, the camera framework is first invoked to capture image data. Through the video encoding HDI, the camera framework sends the data to the player framework at the framework layer. The player framework encodes the image data through the video HDI and saves the encoded image data into a file. + +With the AVRecorder, you can implement pure audio recording, pure video recording, and audio and video recording. + +In Figure 3, the numbers indicate the process where data is transferred to external modules. + +1. The application obtains a surface ID from the player framework through the **AVRecorder** instance. + +2. The application sets the surface ID for the camera framework, which obtains the surface corresponding to the surface ID. The camera framework captures image data through the video HDI and sends the data to the player framework at the framework layer. + +3. The camera framework transfers the video data to the player framework through the surface. + +4. The player framework encodes video data through the video HDI. + +5. The player framework sets the audio parameters for the audio framework and obtains the audio data from the audio framework. + +### Supported Formats + +The table below lists the supported audio sources. + +| Type| Description| +| -------- | -------- | +| mic | The system microphone is used as the audio source input.| + +The table below lists the supported video sources. + +| Type| Description | +| -------- | -------- | +| surface_yuv | The input surface carries raw data.| +| surface_es | The input surface carries ES data.| + +The table below lists the supported audio and video encoding formats. + +| Encoding Format| Description | +| -------- | -------- | +| audio/mp4a-latm | Audio encoding format MP4A-LATM.| +| video/mp4v-es | Video encoding format MPEG-4.| +| video/avc | Video encoding format AVC.| + +The table below lists the supported output file formats. 
+ +| Format| Description | +| -------- | -------- | +| MP4| Video container format MP4.| +| M4A| Audio container format M4A.| diff --git a/en/application-dev/media/avplayer-playback.md b/en/application-dev/media/avplayer-playback.md deleted file mode 100644 index 0281519d5ba777802fc819a4db17b2ce5c49b5fd..0000000000000000000000000000000000000000 --- a/en/application-dev/media/avplayer-playback.md +++ /dev/null @@ -1,477 +0,0 @@ -# AVPlayer Development - -## Introduction - -The AVPlayer converts audio or video resources into audible analog signals or renderable images and plays the signals or images using output devices. You can manage playback tasks on the AVPlayer. For example, you can control the playback (start/pause/stop/seek), set the volume, obtain track information, and release resources. - -## Working Principles - -The following figures show the [AVPlayer state](../reference/apis/js-apis-media.md#avplayerstate9) transition and interaction with external audio and video playback modules. - -**Figure 1** AVPlayer state transition - -![en-us_image_avplayer_state_machine](figures/en-us_image_avplayer_state_machine.png) - -**Figure 2** Interaction with external modules for audio playback - -![en-us_image_avplayer_audio](figures/en-us_image_avplayer_audio.png) - -**NOTE**: When an application calls the **AVPlayer** JS APIs at the JS interface layer to implement a feature, the framework layer parses the resources into audio data streams through the playback service of the player framework. The audio data streams are then decoded by software and output to the audio service of the audio framework. The audio framework outputs the audio data streams to the audio HDI at the hardware interface layer to implement audio playback. A complete audio playback process requires the cooperation of the application (application adaptation required), player framework, audio framework, and audio HDI (driver adaptation required). - -1. An application passes a URL into the **AVPlayer** JS API. -2. The playback service outputs the audio PCM data streams to the audio service, and the audio service outputs the data streams to the audio HDI. - - -**Figure 3** Interaction with external modules for video playback - -![en-us_image_avplayer_video](figures/en-us_image_avplayer_video.png) - -**NOTE**: When an application calls the **AVPlayer** JS APIs at the JS interface layer to implement a feature, the framework layer parses the resources into separate audio data streams and video data streams through the playback service of the player framework. The audio data streams are then decoded by software and output to the audio service of the audio framework. The audio framework outputs the audio data streams to the audio HDI at the hardware interface layer to implement audio playback. The video data streams are then decoded by hardware (recommended) or software and output to the renderer service of the graphic framework. The renderer service outputs the video data streams to the display HDI at the hardware interface layer. A complete video playback process requires the cooperation of the application (application adaptation required), XComponent, player framework, graphic framework, audio framework, display HDI (driver adaptation required), and audio HDI (driver adaptation required). - -1. An application obtains the surface ID from the XComponent. For details about the obtaining method, see [XComponent](../reference/arkui-ts/ts-basic-components-xcomponent.md). -2. 
The application passes a URL and the surface ID into the **AVPlayer** JS API. -3. The playback service outputs video elementary streams (ESs) to the codec HDI, which decodes the ESs to obtain video frames (NV12/NV21/RGBA). -4. The playback service outputs the audio PCM data streams to the audio service, and the audio service outputs the data streams to the audio HDI. -5. The playback service outputs video frames (NV12/NV21/RGBA) to the renderer service, and the renderer service outputs the video frames to the display HDI. - -## Compatibility - -Use the mainstream playback formats and resolutions, rather than custom ones to avoid playback failures, frame freezing, and artifacts. The system will not be affected by incompatibility issues. If such an issue occurs, you can exit stream playback. - -The table below lists the mainstream playback formats and resolutions. - -| Video Container Format| Description | Resolution | -| :----------: | :-----------------------------------------------: | :--------------------------------: | -| mp4 | Video format: H.264/MPEG-2/MPEG-4/H.263; audio format: AAC/MP3| Mainstream resolutions, such as 1080p, 720p, 480p, and 270p| -| mkv | Video format: H.264/MPEG-2/MPEG-4/H.263; audio format: AAC/MP3| Mainstream resolutions, such as 1080p, 720p, 480p, and 270p| -| ts | Video format: H.264/MPEG-2/MPEG-4; audio format: AAC/MP3 | Mainstream resolutions, such as 1080p, 720p, 480p, and 270p| -| webm | Video format: VP8; audio format: VORBIS | Mainstream resolutions, such as 1080p, 720p, 480p, and 270p| - -| Audio Container Format | Description | -| :----------: | :----------: | -| m4a | Audio format: AAC| -| aac | Audio format: AAC| -| mp3 | Audio format: MP3| -| ogg | Audio format: VORBIS | -| wav | Audio format: PCM | - -## How to Develop - -For details about the APIs, see the [AVPlayer APIs in the Media Class](../reference/apis/js-apis-media.md#avplayer9). - -### Full-Process Scenario - -The full playback process includes creating an instance, setting resources, setting a video window, preparing for playback, controlling playback, and resetting or releasing the resources. (During the preparation, you can obtain track information, volume, speed, focus mode, and zoom mode, and set bit rates. To control the playback, you can start, pause, and stop the playback, seek to a playback position, and set the volume.) - -1. Call [createAVPlayer()](../reference/apis/js-apis-media.md#mediacreateavplayer9) to create an **AVPlayer** instance. The AVPlayer is initialized to the [idle](#avplayer_state) state. - -2. Set the events to listen for, which will be used in the full-process scenario. - -3. Set the resource [URL](../reference/apis/js-apis-media.md#avplayer_attributes). When the AVPlayer enters the [initialized](#avplayer_state) state, you can set the [surface ID](../reference/apis/js-apis-media.md#avplayer_attributes) for the video window. For details about the supported specifications, see [AVPlayer Attributes](../reference/apis/js-apis-media.md#avplayer_attributes). - -4. Call [prepare()](../reference/apis/js-apis-media.md#avplayer_prepare) to switch the AVPlayer to the [prepared](#avplayer_state) state. - -5. Perform video playback control. For example, you can call [play()](../reference/apis/js-apis-media.md#avplayer_play), [pause()](../reference/apis/js-apis-media.md#avplayer_pause), [seek()](../reference/apis/js-apis-media.md#avplayer_seek), and [stop()](../reference/apis/js-apis-media.md#avplayer_stop) to control the playback. - -6. 
Call [reset()](../reference/apis/js-apis-media.md#avplayer_reset) to reset resources. The AVPlayer enters the [idle](#avplayer_state) state again, and you can change the resource [URL](../reference/apis/js-apis-media.md#avplayer_attributes). - -7. Call [release()](../reference/apis/js-apis-media.md#avplayer_release) to release the instance. The AVPlayer enters the [released](#avplayer_state) state and exits the playback. - -> **NOTE** -> -> When the AVPlayer is in the prepared, playing, paused, or completed state, the playback engine is working and a large amount of system running memory is occupied. If your application does not need to use the AVPlayer, call **reset()** or **release()** to release the resources. - -### Listening Events - -| Event Type | Description | -| ------------------------------------------------- | ------------------------------------------------------------ | -| stateChange | Mandatory; used to listen for player state changes. | -| error | Mandatory; used to listen for player error information. | -| durationUpdate | Used to listen for progress bar updates to refresh the resource duration. | -| timeUpdate | Used to listen for the current position of the progress bar to refresh the current time. | -| seekDone | Used to listen for the completion status of the **seek()** request. | -| speedDone | Used to listen for the completion status of the **setSpeed()** request. | -| volumeChange | Used to listen for the completion status of the **setVolume()** request. | -| bitrateDone | Used to listen for the completion status of the **setBitrate()** request, which is used for HTTP Live Streaming (HLS) streams. | -| availableBitrates | Used to listen for available bit rates of HLS resources. The available bit rates are provided for **setBitrate()**. | -| bufferingUpdate | Used to listen for network playback buffer information. | -| startRenderFrame | Used to listen for the rendering time of the first frame during video playback. | -| videoSizeChange | Used to listen for the width and height of video playback and adjust the window size and ratio.| -| audioInterrupt | Used to listen for audio interruption during video playback. This event is used together with the **audioInterruptMode** attribute.| - -### Full-Process Scenario API Example - -```js -import media from '@ohos.multimedia.media' -import audio from '@ohos.multimedia.audio'; -import fs from '@ohos.file.fs' - -const TAG = 'AVPlayerDemo:' -export class AVPlayerDemo { - private count:number = 0 - private avPlayer - private surfaceID:string // The surfaceID parameter is used for screen display. Its value is obtained through the XComponent API. - - // Set AVPlayer callback functions. - setAVPlayerCallback() { - // Callback function for state changes. - this.avPlayer.on('stateChange', async (state, reason) => { - switch (state) { - case 'idle': // This state is reported upon a successful callback of reset(). - console.info(TAG + 'state idle called') - this.avPlayer.release() // Release the AVPlayer instance. - break; - case 'initialized': // This state is reported when the AVPlayer sets the playback source. - console.info(TAG + 'state initialized called ') - this.avPlayer.surfaceId = this.surfaceID // Set the image to be displayed. This setting is not required when a pure audio resource is to be played. 
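-           // prepare() is issued from the initialized state; a successful call is confirmed by the 'prepared' state change reported below.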
-           this.avPlayer.prepare().then(() => {
-             console.info(TAG + 'prepare success');
-           }, (err) => {
-             console.error(TAG + 'prepare failed, error message is: ' + err.message)
-           })
-           break;
-         case 'prepared': // This state is reported upon a successful callback of prepare().
-           console.info(TAG + 'state prepared called')
-           this.avPlayer.play() // Call play() to start playback.
-           break;
-         case 'playing': // This state is reported upon a successful callback of play().
-           console.info(TAG + 'state playing called')
-           if (this.count == 0) {
-             this.avPlayer.pause() // Call pause() to pause the playback.
-           } else {
-             this.avPlayer.seek(10000, media.SeekMode.SEEK_PREV_SYNC) // Seek to 10 seconds. The seekDone callback is triggered.
-           }
-           break;
-         case 'paused': // This state is reported upon a successful callback of pause().
-           console.info(TAG + 'state paused called')
-           if (this.count == 0) {
-             this.count++
-             this.avPlayer.play() // Call play() to continue the playback.
-           }
-           break;
-         case 'completed': // This state is reported upon the completion of the playback.
-           console.info(TAG + 'state completed called')
-           this.avPlayer.stop() // Call stop() to stop the playback.
-           break;
-         case 'stopped': // This state is reported upon a successful callback of stop().
-           console.info(TAG + 'state stopped called')
-           this.avPlayer.reset() // Call reset() to initialize the AVPlayer state.
-           break;
-         case 'released':
-           console.info(TAG + 'state released called')
-           break;
-         case 'error':
-           console.error(TAG + 'state error called')
-           break;
-         default:
-           console.info(TAG + 'unknown state: ' + state)
-           break;
-       }
-     })
-     // Callback function for time updates.
-     this.avPlayer.on('timeUpdate', (time:number) => {
-       console.info(TAG + 'timeUpdate success, and new time is: ' + time)
-     })
-     // Callback function for volume updates.
-     this.avPlayer.on('volumeChange', (vol:number) => {
-       console.info(TAG + 'volumeChange success, and new volume is: ' + vol)
-       this.avPlayer.setSpeed(media.AVPlayerSpeed.SPEED_FORWARD_2_00_X) // Double the playback speed. The speedDone callback is triggered.
-     })
-     // Callback function for the video playback completion event.
-     this.avPlayer.on('endOfStream', () => {
-       console.info(TAG + 'endOfStream success')
-     })
-     // Callback function for the seek operation.
-     this.avPlayer.on('seekDone', (seekDoneTime:number) => {
-       console.info(TAG + 'seekDone success, and seek time is: ' + seekDoneTime)
-       this.avPlayer.setVolume(0.5) // Set the volume to 0.5. The volumeChange callback is triggered.
-     })
-     // Callback function for the speed setting operation.
-     this.avPlayer.on('speedDone', (speed:number) => {
-       console.info(TAG + 'speedDone success, and speed value is: ' + speed)
-     })
-     // Callback function for successful bit rate setting.
-     this.avPlayer.on('bitrateDone', (bitrate:number) => {
-       console.info(TAG + 'bitrateDone success, and bitrate value is: ' + bitrate)
-     })
-     // Callback function for buffering updates.
-     this.avPlayer.on('bufferingUpdate', (infoType: media.BufferingInfoType, value: number) => {
-       console.info(TAG + 'bufferingUpdate success, and infoType value is: ' + infoType + ', value is: ' + value)
-     })
-     // Callback function invoked when frame rendering starts.
-     this.avPlayer.on('startRenderFrame', () => {
-       console.info(TAG + 'startRenderFrame success')
-     })
-     // Callback function for video width and height changes.
-     this.avPlayer.on('videoSizeChange', (width: number, height: number) => {
-       console.info(TAG + 'videoSizeChange success, and width is: ' + width + ', height is: ' + height)
-     })
-     // Callback function for the audio interruption event.
-     this.avPlayer.on('audioInterrupt', (info: audio.InterruptEvent) => {
-       console.info(TAG + 'audioInterrupt success, and InterruptEvent info is: ' + info)
-     })
-     // Callback function to report the available bit rates of HLS.
-     this.avPlayer.on('availableBitrates', (bitrates: Array<number>) => {
-       console.info(TAG + 'availableBitrates success, and availableBitrates length is: ' + bitrates.length)
-     })
-   }
-
-   async avPlayerDemo() {
-     // Create an AVPlayer instance.
-     this.avPlayer = await media.createAVPlayer()
-     let fdPath = 'fd://'
-     let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
-     // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el2/100/base/ohos.acts.multimedia.media.avplayer/haps/entry/files" command.
-     let path = pathDir + '/H264_AAC.mp4'
-     let file = await fs.open(path)
-     fdPath = fdPath + '' + file.fd
-     this.avPlayer.url = fdPath
-   }
-}
-```
-
-### Normal Playback Scenario
-
-```js
-import media from '@ohos.multimedia.media'
-import fs from '@ohos.file.fs'
-
-const TAG = 'AVPlayerDemo:'
-export class AVPlayerDemo {
-  private avPlayer
-  private surfaceID:string // The surfaceID parameter is used for screen display. Its value is obtained through the XComponent API.
-
-  // Set AVPlayer callback functions.
-  setAVPlayerCallback() {
-    // Callback function for state changes.
-    this.avPlayer.on('stateChange', async (state, reason) => {
-      switch (state) {
-        case 'idle': // This state is reported upon a successful callback of reset().
-          console.info(TAG + 'state idle called')
-          break;
-        case 'initialized': // This state is reported when the AVPlayer sets the playback source.
-          console.info(TAG + 'state initialized called')
-          this.avPlayer.surfaceId = this.surfaceID // Set the image to be displayed. This setting is not required when a pure audio resource is to be played.
-          this.avPlayer.prepare().then(() => {
-            console.info(TAG + 'prepare success');
-          }, (err) => {
-            console.error(TAG + 'prepare failed, error message is: ' + err.message)
-          })
-          break;
-        case 'prepared': // This state is reported upon a successful callback of prepare().
-          console.info(TAG + 'state prepared called')
-          this.avPlayer.play() // Call play() to start playback.
-          break;
-        case 'playing': // This state is reported upon a successful callback of play().
-          console.info(TAG + 'state playing called')
-          break;
-        case 'paused': // This state is reported upon a successful callback of pause().
-          console.info(TAG + 'state paused called')
-          break;
-        case 'completed': // This state is reported upon the completion of the playback.
-          console.info(TAG + 'state completed called')
-          this.avPlayer.stop() // Call stop() to stop the playback.
-          break;
-        case 'stopped': // This state is reported upon a successful callback of stop().
-          console.info(TAG + 'state stopped called')
-          this.avPlayer.release() // Call release() to release the AVPlayer instance.
-          break;
-        case 'released':
-          console.info(TAG + 'state released called')
-          break;
-        case 'error':
-          console.error(TAG + 'state error called')
-          break;
-        default:
-          console.info(TAG + 'unknown state: ' + state)
-          break;
-      }
-    })
-  }
-
-  async avPlayerDemo() {
-    // Create an AVPlayer instance.
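-    // It is assumed that setAVPlayerCallback() is invoked on this instance before the playback source is assigned below, so that the initial state change reports are not missed.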
-    this.avPlayer = await media.createAVPlayer()
-    let fileDescriptor = undefined
-    // Use getRawFileDescriptor of the resource management module to obtain the media assets in the application, and use the fdSrc attribute of the AVPlayer to initialize the media asset.
-    // For details about the fd/offset/length parameters, see the Media API. The globalThis.abilityContext parameter is a system environment variable and is saved as a global variable on the main page during system boot.
-    await globalThis.abilityContext.resourceManager.getRawFileDescriptor('H264_AAC.mp4').then((value) => {
-      fileDescriptor = {fd: value.fd, offset: value.offset, length: value.length}
-    })
-    this.avPlayer.fdSrc = fileDescriptor
-  }
-}
-```
-
-### Looping a Song
-
-```js
-import media from '@ohos.multimedia.media'
-import fs from '@ohos.file.fs'
-
-const TAG = 'AVPlayerDemo:'
-export class AVPlayerDemo {
-  private count:number = 0
-  private avPlayer
-  private surfaceID:string // The surfaceID parameter is used for screen display. Its value is obtained through the XComponent API.
-
-  // Set AVPlayer callback functions.
-  setAVPlayerCallback() {
-    // Callback function for state changes.
-    this.avPlayer.on('stateChange', async (state, reason) => {
-      switch (state) {
-        case 'idle': // This state is reported upon a successful callback of reset().
-          console.info(TAG + 'state idle called')
-          break;
-        case 'initialized': // This state is reported when the AVPlayer sets the playback source.
-          console.info(TAG + 'state initialized called')
-          this.avPlayer.surfaceId = this.surfaceID // Set the image to be displayed. This setting is not required when a pure audio resource is to be played.
-          this.avPlayer.prepare().then(() => {
-            console.info(TAG + 'prepare success');
-          }, (err) => {
-            console.error(TAG + 'prepare failed, error message is: ' + err.message)
-          })
-          break;
-        case 'prepared': // This state is reported upon a successful callback of prepare().
-          console.info(TAG + 'state prepared called')
-          this.avPlayer.loop = true // Set the AVPlayer to loop a single item. The endOfStream callback is triggered when the previous round of the playback is complete.
-          this.avPlayer.play() // Call play() to start playback.
-          break;
-        case 'playing': // This state is reported upon a successful callback of play().
-          console.info(TAG + 'state playing called')
-          break;
-        case 'paused': // This state is reported upon a successful callback of pause().
-          console.info(TAG + 'state paused called')
-          break;
-        case 'completed': // This state is reported upon the completion of the playback.
-          console.info(TAG + 'state completed called')
-          // Loop playback is canceled when the endOfStream callback is triggered for the second time, so the completed state is reported when the next round of the playback is complete.
-          this.avPlayer.stop() // Call stop() to stop the playback.
-          break;
-        case 'stopped': // This state is reported upon a successful callback of stop().
-          console.info(TAG + 'state stopped called')
-          this.avPlayer.release() // Call release() to release the AVPlayer instance.
-          break;
-        case 'released':
-          console.info(TAG + 'state released called')
-          break;
-        case 'error':
-          console.info(TAG + 'state error called')
-          break;
-        default:
-          console.info(TAG + 'unknown state: ' + state)
-          break;
-      }
-    })
-    // Callback function for the playback completion event.
-    this.avPlayer.on('endOfStream', () => {
-      console.info(TAG + 'endOfStream success')
-      if (this.count == 1) {
-        this.avPlayer.loop = false // Cancel loop playback.
-      } else {
-        this.count++
-      }
-    })
-  }
-
-  async avPlayerDemo() {
-    // Create an AVPlayer instance.
-    this.avPlayer = await media.createAVPlayer()
-    let fdPath = 'fd://'
-    let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
-    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el2/100/base/ohos.acts.multimedia.media.avplayer/haps/entry/files" command.
-    let path = pathDir + '/H264_AAC.mp4'
-    let file = await fs.open(path)
-    fdPath = fdPath + '' + file.fd
-    this.avPlayer.url = fdPath
-  }
-}
-```
-### Switching to the Next Video Clip
-
-```js
-import media from '@ohos.multimedia.media'
-import fs from '@ohos.file.fs'
-
-const TAG = 'AVPlayerDemo:'
-export class AVPlayerDemo {
-  private count:number = 0
-  private avPlayer
-  private surfaceID:string // The surfaceID parameter is used for screen display. Its value is obtained through the XComponent API.
-
-  async nextVideo() {
-    let fdPath = 'fd://'
-    let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
-    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_MP3.mp4 /data/app/el2/100/base/ohos.acts.multimedia.media.avplayer/haps/entry/files" command.
-    let path = pathDir + '/H264_MP3.mp4'
-    let file = await fs.open(path)
-    fdPath = fdPath + '' + file.fd
-    this.avPlayer.url = fdPath // The initialized state is reported again.
-  }
-
-  // Set AVPlayer callback functions.
-  setAVPlayerCallback() {
-    // Callback function for state changes.
-    this.avPlayer.on('stateChange', async (state, reason) => {
-      switch (state) {
-        case 'idle': // This state is reported upon a successful callback of reset().
-          console.info(TAG + 'state idle called')
-          await this.nextVideo() // Switch to the next video.
-          break;
-        case 'initialized': // This state is reported when the AVPlayer sets the playback source.
-          console.info(TAG + 'state initialized called')
-          this.avPlayer.surfaceId = this.surfaceID // Set the image to be displayed. This setting is not required when a pure audio resource is to be played.
-          this.avPlayer.prepare().then(() => {
-            console.info(TAG + 'prepare success');
-          }, (err) => {
-            console.error(TAG + 'prepare failed, error message is: ' + err.message)
-          })
-          break;
-        case 'prepared': // This state is reported upon a successful callback of prepare().
-          console.info(TAG + 'state prepared called')
-          this.avPlayer.play() // Call play() to start playback.
-          break;
-        case 'playing': // This state is reported upon a successful callback of play().
-          console.info(TAG + 'state playing called')
-          break;
-        case 'paused': // This state is reported upon a successful callback of pause().
-          console.info(TAG + 'state paused called')
-          break;
-        case 'completed': // This state is reported upon the completion of the playback.
-          console.info(TAG + 'state completed called')
-          if (this.count == 0) {
-            this.count++
-            this.avPlayer.reset() // Call reset() to prepare for switching to the next video.
-          } else {
-            this.avPlayer.release() // Release the AVPlayer instance when the new video finishes playing.
-          }
-          break;
-        case 'stopped': // This state is reported upon a successful callback of stop().
-          console.info(TAG + 'state stopped called')
-          break;
-        case 'released':
-          console.info(TAG + 'state released called')
-          break;
-        case 'error':
-          console.info(TAG + 'state error called')
-          break;
-        default:
-          console.info(TAG + 'unknown state: ' + state)
-          break;
-      }
-    })
-  }
-
-  async avPlayerDemo() {
-    // Create an AVPlayer instance.
-    this.avPlayer = await media.createAVPlayer()
-    let fdPath = 'fd://'
-    let pathDir = "/data/storage/el2/base/haps/entry/files" // The path used here is an example. Obtain the path based on project requirements.
-    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el2/100/base/ohos.acts.multimedia.media.avplayer/haps/entry/files" command.
-    let path = pathDir + '/H264_AAC.mp4'
-    let file = await fs.open(path)
-    fdPath = fdPath + '' + file.fd
-    this.avPlayer.url = fdPath
-  }
-}
-```
diff --git a/en/application-dev/media/avrecorder.md b/en/application-dev/media/avrecorder.md
deleted file mode 100644
index fa8238bc815c7a6d0f4f7ad9f1d8e509563e1f50..0000000000000000000000000000000000000000
--- a/en/application-dev/media/avrecorder.md
+++ /dev/null
@@ -1,482 +0,0 @@
-# AVRecorder Development
-
-## Introduction
-
-The AVRecorder captures audio signals, receives video signals, encodes audio and video signals, and saves them to files. With the AVRecorder, you can easily implement audio and video recording, including starting, pausing, resuming, and stopping recording, and releasing resources. You can also specify parameters such as the encoding format, encapsulation format, and file path for recording.
-
-## Working Principles
-
-The following figures show the AVRecorder state transition and the interaction with external modules for audio and video recording.
-
-**Figure 1** AVRecorder state transition
-
-![en-us_image_video_recorder_state_machine](figures/en-us_image_avrecorder_state_machine.png)
-
-**Figure 2** Interaction between external modules for audio and video recording
-
-![en-us_image_video_recorder_zero](figures/en-us_image_avrecorder_module_interaction.png)
-
-**NOTE**: During audio recording, the framework layer calls the audio subsystem through the media service of the native framework to capture audio data through the audio HDI, encodes and encapsulates the data in software, and saves the data to a file. During video recording, the camera subsystem captures image data through the video HDI. The media service encodes the image data through the video encoding HDI and encapsulates the encoded image data into a file. With the AVRecorder, you can implement pure audio recording, pure video recording, and audio and video recording.
-
-## Constraints
-
-Before developing the recording feature, configure permissions for your application. If audio recording is involved, obtain the permission **ohos.permission.MICROPHONE** by following the instructions provided in [Permission Application Guide](../security/accesstoken-guidelines.md).
-
-To use the camera to record videos, the camera module is required. For details about how to use the APIs and obtain permissions, see [Camera Management](../reference/apis/js-apis-camera.md).
-
-## How to Develop
-
-For details about the AVRecorder APIs, see the [AVRecorder APIs in the Media Class](../reference/apis/js-apis-media.md#avrecorder9).
-
-For details about the processes related to the media library, see [Media Library Management](../reference/apis/js-apis-medialibrary.md).
-
-For details about the camera-related process, see [Camera Management](../reference/apis/js-apis-camera.md).
-
-### Full-Process Scenario of Audio and Video Recording
-
-The full audio and video recording process includes creating an instance, setting recording parameters, obtaining the input surface, starting, pausing, resuming, and stopping recording, and releasing resources.
-
-The value range that can be set for the audio recording parameters is restricted by the codec performance of the device and the performance of the audio subsystem.
-
-The value range that can be set for the video recording parameters is restricted by the codec performance of the device and the performance of the camera subsystem.
-
-```
-import media from '@ohos.multimedia.media'
-import camera from '@ohos.multimedia.camera'
-import mediaLibrary from '@ohos.multimedia.mediaLibrary'
-
-export class AVRecorderDemo {
-  private testFdNumber; // Used to save the File Descriptor (FD) address.
-
-  // Obtain the FD corresponding to fileName of the recorded file. The media library capability is required. To use the media library, configure the following permissions: ohos.permission.MEDIA_LOCATION, ohos.permission.WRITE_MEDIA, and ohos.permission.READ_MEDIA.
-  async getFd(fileName) {
-    // For details about the implementation mode, see the media library documentation. fdNumber is the FD obtained from the media library.
-    this.testFdNumber = "fd://" + fdNumber.toString(); // e.g. fd://54
-  }
-
-  // Error callback triggered in the case of an error in the promise mode.
-  failureCallback(error) {
-    console.info('error happened, error message is ' + error.message);
-  }
-
-  // Error callback triggered in the case of an exception in the promise mode.
-  catchCallback(error) {
-    console.info('catch error happened, error message is ' + error.message);
-  }
-
-  async AVRecorderDemo() {
-    let AVRecorder; // Assign a value to the empty AVRecorder instance upon a successful call of createAVRecorder().
-    let surfaceID; // The surface ID is obtained by calling getInputSurface and transferred to the videoOutput object of the camera.
-    await this.getFd('01.mp4');
-
-    // Configure the parameters related to audio and video recording based on those supported by the hardware device.
-    let avProfile = {
-      audioBitrate : 48000,
-      audioChannels : 2,
-      audioCodec : media.CodecMimeType.AUDIO_AAC,
-      audioSampleRate : 48000,
-      fileFormat : media.ContainerFormatType.CFT_MPEG_4,
-      videoBitrate : 2000000,
-      videoCodec : media.CodecMimeType.VIDEO_MPEG4,
-      videoFrameWidth : 640,
-      videoFrameHeight : 480,
-      videoFrameRate : 30
-    }
-    let avConfig = {
-      audioSourceType : media.AudioSourceType.AUDIO_SOURCE_TYPE_MIC,
-      videoSourceType : media.VideoSourceType.VIDEO_SOURCE_TYPE_SURFACE_YUV,
-      profile : avProfile,
-      url : this.testFdNumber, // FD obtained by getFd(), in the format fd://xx.
-      rotation : 0,
-      location : { latitude : 30, longitude : 130 }
-    }
-
-    // Create an AVRecorder instance.
-    await media.createAVRecorder().then((recorder) => {
-      console.info('case createAVRecorder called');
-      if (typeof (recorder) != 'undefined') {
-        AVRecorder = recorder;
-        console.info('createAVRecorder success');
-      } else {
-        console.info('createAVRecorder failed');
-      }
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // After the instance is created, use the on('stateChange') and on('error') callbacks to listen for state changes and errors.
-    AVRecorder.on('stateChange', async (state, reason) => {
-      console.info('case state has changed, new state is :' + state);
-      switch (state) {
-        // You can set the desired behavior in different states as required.
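-        // For reference, a typical successful recording passes through the states:
-        // idle -> prepared -> started -> (paused <-> started) -> stopped -> released.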
-        case 'idle':
-          // This state is reported upon a successful call of reset() or create().
-          break;
-        case 'prepared':
-          // This state is reported upon a successful call of prepare().
-          break;
-        case 'started':
-          // This state is reported upon a successful call of start().
-          break;
-        case 'paused':
-          // This state is reported upon a successful call of pause().
-          break;
-        case 'stopped':
-          // This state is reported upon a successful call of stop().
-          break;
-        case 'released':
-          // This state is reported upon a successful call of release().
-          break;
-        case 'error':
-          // The error state indicates that an error occurs at the bottom layer. You must rectify the fault and create an AVRecorder instance again.
-          break;
-        default:
-          console.info('case state is unknown');
-      }
-    });
-    AVRecorder.on('error', (err) => {
-      // Listen for non-interface errors.
-      console.info('case avRecorder.on(error) called, errMessage is ' + err.message);
-    });
-
-    // Call prepare() to prepare for recording. The bottom layer determines whether to record audio, video, or audio and video based on the input parameters of prepare().
-    await AVRecorder.prepare(avConfig).then(() => {
-      console.info('prepare success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // If video recording is involved, call getInputSurface to obtain the input surface and pass the returned surface ID to the related camera API.
-    await AVRecorder.getInputSurface().then((surface) => {
-      console.info('getInputSurface success');
-      surfaceID = surface; // The surfaceID is passed into createVideoOutput() of the camera as an input parameter.
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Video recording depends on camera-related APIs. The following operations can be performed only after the video output start API is invoked.
-    // Start video recording.
-    await AVRecorder.start().then(() => {
-      console.info('start success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Pause video recording before the video output stop API of the camera is invoked.
-    await AVRecorder.pause().then(() => {
-      console.info('pause success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Resume video recording after the video output start API of the camera is invoked.
-    await AVRecorder.resume().then(() => {
-      console.info('resume success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Stop video recording after the video output stop API of the camera is invoked.
-    await AVRecorder.stop().then(() => {
-      console.info('stop success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Reset the recording configuration.
-    await AVRecorder.reset().then(() => {
-      console.info('reset success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Disable the listeners. The configured callbacks will be invalid after release() is invoked, even if you do not call off().
-    AVRecorder.off('stateChange');
-    AVRecorder.off('error');
-
-    // Release the video recording resources and camera object resources.
-    await AVRecorder.release().then(() => {
-      console.info('release success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Set the AVRecorder instance to null.
-    AVRecorder = undefined;
-    surfaceID = undefined;
-  }
-}
-```
-
-### Full-Process Scenario of Pure Audio Recording
-
-The full audio recording process includes creating an instance, setting recording parameters, starting, pausing, resuming, and stopping recording, and releasing resources.
-
-The value range that can be set for the audio recording parameters is restricted by the codec performance of the device and the performance of the audio subsystem.
-
-```
-import media from '@ohos.multimedia.media'
-import mediaLibrary from '@ohos.multimedia.mediaLibrary'
-
-export class AudioRecorderDemo {
-  private testFdNumber; // Used to save the FD address.
-
-  // Obtain the FD corresponding to fileName of the recorded file. The media library capability is required. To use the media library, configure the following permissions: ohos.permission.MEDIA_LOCATION, ohos.permission.WRITE_MEDIA, and ohos.permission.READ_MEDIA.
-  async getFd(fileName) {
-    // For details about the implementation mode, see the media library documentation. fdNumber is the FD obtained from the media library.
-    this.testFdNumber = "fd://" + fdNumber.toString(); // e.g. fd://54
-  }
-
-  // Error callback triggered in the case of an error in the promise mode.
-  failureCallback(error) {
-    console.info('error happened, error message is ' + error.message);
-  }
-
-  // Error callback triggered in the case of an exception in the promise mode.
-  catchCallback(error) {
-    console.info('catch error happened, error message is ' + error.message);
-  }
-
-  async audioRecorderDemo() {
-    let audioRecorder; // Assign a value to the empty AudioRecorder instance upon a successful call of createAVRecorder().
-    await this.getFd('01.m4a');
-    // Configure the parameters related to audio recording.
-    let audioProfile = {
-      audioBitrate : 48000,
-      audioChannels : 2,
-      audioCodec : media.CodecMimeType.AUDIO_AAC,
-      audioSampleRate : 48000,
-      fileFormat : media.ContainerFormatType.CFT_MPEG_4,
-    }
-    let audioConfig = {
-      audioSourceType : media.AudioSourceType.AUDIO_SOURCE_TYPE_MIC,
-      profile : audioProfile,
-      url : this.testFdNumber,
-      rotation : 0,
-      location : { latitude : 30, longitude : 130 }
-    }
-
-    // Create an AudioRecorder instance.
-    await media.createAVRecorder().then((recorder) => {
-      console.info('case createAVRecorder called');
-      if (typeof (recorder) != 'undefined') {
-        audioRecorder = recorder;
-        console.info('createAudioRecorder success');
-      } else {
-        console.info('createAudioRecorder failed');
-      }
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // After the instance is created, use the on('stateChange') and on('error') callbacks to listen for state changes and errors.
-    audioRecorder.on('stateChange', async (state, reason) => {
-      console.info('case state has changed, new state is :' + state);
-      switch (state) {
-        // You can set the desired behavior in different states as required.
-        case 'idle':
-          // This state is reported upon a successful call of reset() or create().
-          break;
-        case 'prepared':
-          // This state is reported upon a successful call of prepare().
-          break;
-        case 'started':
-          // This state is reported upon a successful call of start().
-          break;
-        case 'paused':
-          // This state is reported upon a successful call of pause().
-          break;
-        case 'stopped':
-          // This state is reported upon a successful call of stop().
-          break;
-        case 'released':
-          // This state is reported upon a successful call of release().
-          break;
-        case 'error':
-          // The error state indicates that an error occurs at the bottom layer. You must rectify the fault and create an AudioRecorder instance again.
-          break;
-        default:
-          console.info('case state is unknown');
-      }
-    });
-    audioRecorder.on('error', (err) => {
-      // Listen for non-interface errors.
-      console.info('case audioRecorder.on(error) called, errMessage is ' + err.message);
-    });
-
-    // Call prepare() to prepare for recording. The bottom layer determines whether to record audio, video, or audio and video based on the input parameters of prepare().
-    await audioRecorder.prepare(audioConfig).then(() => {
-      console.info('prepare success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Call start() to start audio recording.
-    await audioRecorder.start().then(() => {
-      console.info('start success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Call pause() to pause audio recording.
-    await audioRecorder.pause().then(() => {
-      console.info('pause success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Call resume() to resume audio recording.
-    await audioRecorder.resume().then(() => {
-      console.info('resume success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Call stop() to stop audio recording.
-    await audioRecorder.stop().then(() => {
-      console.info('stop success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Call reset() to reset the recording configuration.
-    await audioRecorder.reset().then(() => {
-      console.info('reset success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Disable the listeners. The configured callbacks will be invalid after release() is invoked, even if you do not call off().
-    audioRecorder.off('stateChange');
-    audioRecorder.off('error');
-
-    // Call release() to release audio recording resources.
-    await audioRecorder.release().then(() => {
-      console.info('release success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Set the AudioRecorder instance to null.
-    audioRecorder = undefined;
-  }
-}
-
-```
-
-### Full-Process Scenario of Pure Video Recording
-
-The full video recording process includes creating an instance, setting recording parameters, obtaining the input surface, starting, pausing, resuming, and stopping recording, and releasing resources.
-
-The value range that can be set for the video recording parameters is restricted by the codec performance of the device and the performance of the camera subsystem.
-
-```
-import media from '@ohos.multimedia.media'
-import camera from '@ohos.multimedia.camera'
-import mediaLibrary from '@ohos.multimedia.mediaLibrary'
-
-export class VideoRecorderDemo {
-  private testFdNumber; // Used to save the FD address.
-
-  // Obtain the FD corresponding to fileName of the recorded file. The media library capability is required. To use the media library, configure the following permissions: ohos.permission.MEDIA_LOCATION, ohos.permission.WRITE_MEDIA, and ohos.permission.READ_MEDIA.
-  async getFd(fileName) {
-    // For details about the implementation mode, see the media library documentation. fdNumber is the FD obtained from the media library.
-    this.testFdNumber = "fd://" + fdNumber.toString(); // e.g. fd://54
-  }
-
-  // Error callback triggered in the case of an error in the promise mode.
-  failureCallback(error) {
-    console.info('error happened, error message is ' + error.message);
-  }
-
-  // Error callback triggered in the case of an exception in the promise mode.
-  catchCallback(error) {
-    console.info('catch error happened, error message is ' + error.message);
-  }
-
-  async videoRecorderDemo() {
-    let videoRecorder; // Assign a value to the empty VideoRecorder instance upon a successful call of createAVRecorder().
-    let surfaceID; // The surface ID is obtained by calling getInputSurface and transferred to the videoOutput object of the camera.
-    await this.getFd('01.mp4');
-
-    // Configure the parameters related to pure video recording based on those supported by the hardware device.
-    let videoProfile = {
-      fileFormat : media.ContainerFormatType.CFT_MPEG_4,
-      videoBitrate : 2000000,
-      videoCodec : media.CodecMimeType.VIDEO_MPEG4,
-      videoFrameWidth : 640,
-      videoFrameHeight : 480,
-      videoFrameRate : 30
-    }
-    let videoConfig = {
-      videoSourceType : media.VideoSourceType.VIDEO_SOURCE_TYPE_SURFACE_YUV,
-      profile : videoProfile,
-      url : this.testFdNumber, // FD obtained by getFd(), in the format fd://xx.
-      rotation : 0,
-      location : { latitude : 30, longitude : 130 }
-    }
-
-    // Create a VideoRecorder instance.
-    await media.createAVRecorder().then((recorder) => {
-      console.info('case createVideoRecorder called');
-      if (typeof (recorder) != 'undefined') {
-        videoRecorder = recorder;
-        console.info('createVideoRecorder success');
-      } else {
-        console.info('createVideoRecorder failed');
-      }
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // After the instance is created, use the on('stateChange') and on('error') callbacks to listen for state changes and errors.
-    videoRecorder.on('stateChange', async (state, reason) => {
-      console.info('case state has changed, new state is :' + state);
-      switch (state) {
-        // You can set the desired behavior in different states as required.
-        case 'idle':
-          // This state is reported upon a successful call of reset() or create().
-          break;
-        case 'prepared':
-          // This state is reported upon a successful call of prepare().
-          break;
-        case 'started':
-          // This state is reported upon a successful call of start().
-          break;
-        case 'paused':
-          // This state is reported upon a successful call of pause().
-          break;
-        case 'stopped':
-          // This state is reported upon a successful call of stop().
-          break;
-        case 'released':
-          // This state is reported upon a successful call of release().
-          break;
-        case 'error':
-          // The error state indicates that an error occurs at the bottom layer. You must rectify the fault and create a VideoRecorder instance again.
-          break;
-        default:
-          console.info('case state is unknown');
-      }
-    });
-    videoRecorder.on('error', (err) => {
-      // Listen for non-interface errors.
-      console.info('case videoRecorder.on(error) called, errMessage is ' + err.message);
-    });
-
-    // Call prepare() to prepare for recording. The bottom layer determines whether to record audio, video, or audio and video based on the input parameters of prepare().
-    await videoRecorder.prepare(videoConfig).then(() => {
-      console.info('prepare success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // If video recording is involved, call getInputSurface to obtain the input surface and pass the returned surface ID to the related camera API.
-    await videoRecorder.getInputSurface().then((surface) => {
-      console.info('getInputSurface success');
-      surfaceID = surface; // The surfaceID is passed into createVideoOutput() of the camera as an input parameter.
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Video recording depends on camera-related APIs. The following operations can be performed only after the video output start API is invoked.
-    // Start video recording.
-    await videoRecorder.start().then(() => {
-      console.info('start success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Pause video recording before the video output stop API of the camera is invoked.
-    await videoRecorder.pause().then(() => {
-      console.info('pause success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Resume video recording after the video output start API of the camera is invoked.
-    await videoRecorder.resume().then(() => {
-      console.info('resume success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Stop video recording after the video output stop API of the camera is invoked.
-    await videoRecorder.stop().then(() => {
-      console.info('stop success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Reset the recording configuration.
-    await videoRecorder.reset().then(() => {
-      console.info('reset success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Disable the listeners. The configured callbacks will be invalid after release() is invoked, even if you do not call off().
-    videoRecorder.off('stateChange');
-    videoRecorder.off('error');
-
-    // Release the video recording resources and camera object resources.
-    await videoRecorder.release().then(() => {
-      console.info('release success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Set the VideoRecorder instance to null.
-    videoRecorder = undefined;
-    surfaceID = undefined;
-  }
-}
-```
diff --git a/en/application-dev/media/avsession-guidelines.md b/en/application-dev/media/avsession-guidelines.md
deleted file mode 100644
index 8faf5557b0d44751b85a9f2bb11c921a4dd6d4f0..0000000000000000000000000000000000000000
--- a/en/application-dev/media/avsession-guidelines.md
+++ /dev/null
@@ -1,644 +0,0 @@
-# AVSession Development
-
-> **NOTE**
->
-> All APIs of the **AVSession** module are system APIs and can be called only by system applications.
-
-## Development for the Session Access End
-
-### Basic Concepts
-- **AVMetadata**: media data related attributes, including the IDs of the current media asset, previous media asset, and next media asset, title, author, album, writer, and duration.
-- **AVSessionDescriptor**: descriptor about a media session, including the session ID, session type (audio/video), custom session name (**sessionTag**), and information about the corresponding application (**elementName**).
-- **AVPlaybackState**: information related to the media playback state, including the playback state, position, speed, buffered time, loop mode, and whether the media asset is favorited (**isFavorite**).
-
-### Available APIs
-The table below lists the APIs available for the development of the session access end. The APIs use either a callback or promise to return the result. The APIs listed below use a callback; they provide the same functions as their counterparts that use a promise. For details, see [AVSession Management](../reference/apis/js-apis-avsession.md).
-
-Table 1 Common APIs for session access end development
-
-| API | Description |
-|----------------------------------------------------------------------------------|-------------|
-| createAVSession(context: Context, tag: string, type: AVSessionType, callback: AsyncCallback\<AVSession\>): void | Creates a session.|
-| setAVMetadata(data: AVMetadata, callback: AsyncCallback\<void\>): void | Sets session metadata. |
-| setAVPlaybackState(state: AVPlaybackState, callback: AsyncCallback\<void\>): void | Sets the playback state information. |
-| setLaunchAbility(ability: WantAgent, callback: AsyncCallback\<void\>): void | Sets the launcher ability.|
-| getController(callback: AsyncCallback\<AVSessionController\>): void | Obtains the controller of this session.|
-| getOutputDevice(callback: AsyncCallback\<OutputDeviceInfo\>): void | Obtains the output device information. |
-| activate(callback: AsyncCallback\<void\>): void | Activates this session. |
-| destroy(callback: AsyncCallback\<void\>): void | Destroys this session. |
-
-### How to Develop
-1. Import the modules.
- - ```js - import avSession from '@ohos.multimedia.avsession'; - import wantAgent from '@ohos.wantAgent'; - import featureAbility from '@ohos.ability.featureAbility'; - ``` - -2. Create and activate a session. - - ```js - // Define global variables. - let mediaFavorite = false; - let currentSession = null; - let context = featureAbility.getContext(); - - // Create an audio session. - avSession.createAVSession(context, "AudioAppSample", 'audio').then((session) => { - currentSession = session; - currentSession.activate(); // Activate the session. - }).catch((err) => { - console.info(`createAVSession : ERROR : ${err.message}`); - }); - ``` - -3. Set the session information, including: - -- Session metadata. In addition to the current media asset ID (mandatory), you can set the title, album, author, duration, and previous/next media asset ID. For details about the session metadata, see **AVMetadata** in the API document. -- Launcher ability, which is implemented by calling an API of [WantAgent](../reference/apis/js-apis-wantAgent.md). Generally, **WantAgent** is used to encapsulate want information. -- Playback state information. - - ```js - // Set the session metadata. - let metadata = { - assetId: "121278", - title: "lose yourself", - artist: "Eminem", - author: "ST", - album: "Slim shady", - writer: "ST", - composer: "ST", - duration: 2222, - mediaImage: "https://www.example.com/example.jpg", // Set it based on your project requirements. - subtitle: "8 Mile", - description: "Rap", - lyric: "https://www.example.com/example.lrc", // Set it based on your project requirements. - previousAssetId: "121277", - nextAssetId: "121279", - }; - currentSession.setAVMetadata(metadata).then(() => { - console.info('setAVMetadata successfully'); - }).catch((err) => { - console.info(`setAVMetadata : ERROR : ${err.message}`); - }); - ``` - - ```js - // Set the launcher ability. - let wantAgentInfo = { - wants: [ - { - bundleName: "com.neu.setResultOnAbilityResultTest1", - abilityName: "com.example.test.MainAbility", - } - ], - operationType: wantAgent.OperationType.START_ABILITIES, - requestCode: 0, - wantAgentFlags:[wantAgent.WantAgentFlags.UPDATE_PRESENT_FLAG] - } - - wantAgent.getWantAgent(wantAgentInfo).then((agent) => { - currentSession.setLaunchAbility(agent).then(() => { - console.info('setLaunchAbility successfully'); - }).catch((err) => { - console.info(`setLaunchAbility : ERROR : ${err.message}`); - }); - }); - ``` - - ```js - // Set the playback state information. - let PlaybackState = { - state: avSession.PlaybackState.PLAYBACK_STATE_STOP, - speed: 1.0, - position:{elapsedTime: 0, updateTime: (new Date()).getTime()}, - bufferedTime: 1000, - loopMode: avSession.LoopMode.LOOP_MODE_SEQUENCE, - isFavorite: false, - }; - currentSession.setAVPlaybackState(PlaybackState).then(() => { - console.info('setAVPlaybackState successfully'); - }).catch((err) => { - console.info(`setAVPlaybackState : ERROR : ${err.message}`); - }); - ``` - - ```js - // Obtain the controller of this session. - currentSession.getController().then((selfController) => { - console.info('getController successfully'); - }).catch((err) => { - console.info(`getController : ERROR : ${err.message}`); - }); - ``` - - ```js - // Obtain the output device information. - currentSession.getOutputDevice().then((outputInfo) => { - console.info(`getOutputDevice successfully, deviceName : ${outputInfo.deviceName}`); - }).catch((err) => { - console.info(`getOutputDevice : ERROR : ${err.message}`); - }); - ``` - -4. 
Subscribe to control command events.
-
-   ```js
-   // Subscribe to the 'play' command event.
-   currentSession.on('play', () => {
-     console.log("Call AudioPlayer.play.");
-     // Set the playback state information.
-     currentSession.setAVPlaybackState({state: avSession.PlaybackState.PLAYBACK_STATE_PLAY}).then(() => {
-       console.info('setAVPlaybackState successfully');
-     }).catch((err) => {
-       console.info(`setAVPlaybackState : ERROR : ${err.message}`);
-     });
-   });
-
-   // Subscribe to the 'pause' command event.
-   currentSession.on('pause', () => {
-     console.log("Call AudioPlayer.pause.");
-     // Set the playback state information.
-     currentSession.setAVPlaybackState({state: avSession.PlaybackState.PLAYBACK_STATE_PAUSE}).then(() => {
-       console.info('setAVPlaybackState successfully');
-     }).catch((err) => {
-       console.info(`setAVPlaybackState : ERROR : ${err.message}`);
-     });
-   });
-
-   // Subscribe to the 'stop' command event.
-   currentSession.on('stop', () => {
-     console.log("Call AudioPlayer.stop.");
-     // Set the playback state information.
-     currentSession.setAVPlaybackState({state: avSession.PlaybackState.PLAYBACK_STATE_STOP}).then(() => {
-       console.info('setAVPlaybackState successfully');
-     }).catch((err) => {
-       console.info(`setAVPlaybackState : ERROR : ${err.message}`);
-     });
-   });
-
-   // Subscribe to the 'playNext' command event.
-   currentSession.on('playNext', () => {
-     // When the media file is not ready, download and cache the media file, and set the 'PREPARE' state.
-     currentSession.setAVPlaybackState({state: avSession.PlaybackState.PLAYBACK_STATE_PREPARE}).then(() => {
-       console.info('setAVPlaybackState successfully');
-     }).catch((err) => {
-       console.info(`setAVPlaybackState : ERROR : ${err.message}`);
-     });
-     // The media file is obtained.
-     currentSession.setAVMetadata({assetId: '58970105', title: 'See you tomorrow'}).then(() => {
-       console.info('setAVMetadata successfully');
-     }).catch((err) => {
-       console.info(`setAVMetadata : ERROR : ${err.message}`);
-     });
-     console.log("Call AudioPlayer.play.");
-     // Set the playback state information.
-     let time = (new Date()).getTime();
-     currentSession.setAVPlaybackState({state: avSession.PlaybackState.PLAYBACK_STATE_PLAY, position: {elapsedTime: 0, updateTime: time}, bufferedTime: 2000}).then(() => {
-       console.info('setAVPlaybackState successfully');
-     }).catch((err) => {
-       console.info(`setAVPlaybackState : ERROR : ${err.message}`);
-     });
-   });
-
-   // Subscribe to the 'fastForward' command event.
-   currentSession.on('fastForward', () => {
-     console.log("Call AudioPlayer for fast forwarding.");
-     // Set the playback state information.
-     currentSession.setAVPlaybackState({speed: 2.0}).then(() => {
-       console.info('setAVPlaybackState successfully');
-     }).catch((err) => {
-       console.info(`setAVPlaybackState : ERROR : ${err.message}`);
-     });
-   });
-
-   // Subscribe to the 'seek' command event.
-   currentSession.on('seek', (time) => {
-     console.log("Call AudioPlayer.seek.");
-     // Set the playback state information.
-     currentSession.setAVPlaybackState({position: {elapsedTime: time, updateTime: (new Date()).getTime()}}).then(() => {
-       console.info('setAVPlaybackState successfully');
-     }).catch((err) => {
-       console.info(`setAVPlaybackState : ERROR : ${err.message}`);
-     });
-   });
-
-   // Subscribe to the 'setSpeed' command event.
-   currentSession.on('setSpeed', (speed) => {
-     console.log(`Call AudioPlayer to set the speed to ${speed}`);
-     // Set the playback state information.
- currentSession.setAVPlaybackState({speed: speed}).then(() => { - console.info('setAVPlaybackState successfully'); - }).catch((err) => { - console.info(`setAVPlaybackState : ERROR : ${err.message}`); - }); - }); - - // Subscribe to the 'setLoopMode' command event. - currentSession.on('setLoopMode', (mode) => { - console.log(`The application switches to the loop mode ${mode}`); - // Set the playback state information. - currentSession.setAVPlaybackState({loopMode: mode}).then(() => { - console.info('setAVPlaybackState successfully'); - }).catch((err) => { - console.info(`setAVPlaybackState : ERROR : ${err.message}`); - }); - }); - - // Subscribe to the 'toggleFavorite' command event. - currentSession.on('toggleFavorite', (assetId) => { - console.log(`The application favorites ${assetId}.`); - // Perform the switch based on the last status. - let favorite = mediaFavorite == false ? true : false; - currentSession.setAVPlaybackState({isFavorite: favorite}).then(() => { - console.info('setAVPlaybackState successfully'); - }).catch((err) => { - console.info(`setAVPlaybackState : ERROR : ${err.message}`); - }); - mediaFavorite = favorite; - }); - - // Subscribe to the key event. - currentSession.on('handleKeyEvent', (event) => { - console.log(`User presses the key ${event.keyCode}`); - }); - - // Subscribe to output device changes. - currentSession.on('outputDeviceChange', (device) => { - console.log(`Output device changed to ${device.deviceName}`); - }); - ``` - -5. Release resources. - - ```js - // Unsubscribe from the events. - currentSession.off('play'); - currentSession.off('pause'); - currentSession.off('stop'); - currentSession.off('playNext'); - currentSession.off('playPrevious'); - currentSession.off('fastForward'); - currentSession.off('rewind'); - currentSession.off('seek'); - currentSession.off('setSpeed'); - currentSession.off('setLoopMode'); - currentSession.off('toggleFavorite'); - currentSession.off('handleKeyEvent'); - currentSession.off('outputDeviceChange'); - - // Deactivate the session and destroy the object. - currentSession.deactivate().then(() => { - currentSession.destroy(); - }); - ``` - -### Verification -Touch the play, pause, or next button on the media application. Check whether the media playback state changes accordingly. - -### FAQs - -1. Session Service Exception -- Symptoms - - The session service is abnormal, and the application cannot obtain a response from the session service. For example, the session service is not running or the communication with the session service fails. The error message "Session service exception" is displayed. - -- Possible causes - - The session service is killed during session restart. - -- Solution - - (1) The system retries the operation automatically. If the error persists for 3 seconds or more, stop the operation on the session or controller. - - (2) Destroy the current session or session controller and re-create it. If the re-creation fails, stop the operation on the session. - -2. Session Does Not Exist -- Symptoms - - Parameters are set for or commands are sent to the session that does not exist. The error message "The session does not exist" is displayed. - -- Possible causes - - The session has been destroyed, and no session record exists on the server. - -- Solution - - (1) If the error occurs on the application, re-create the session. If the error occurs on Media Controller, stop sending query or control commands to the session. 
-
-  (2) If the error occurs on the session service, query the current session record and pass the correct session ID when creating the controller.
-
-3. Session Not Activated
-- Symptoms
-
-  A control command or event is sent to the session when it is not activated. The error message "The session not active" is displayed.
-
-- Possible causes
-
-  The session is in the inactive state.
-
-- Solution
-
-  Stop sending the command or event. Subscribe to the session activation status, and resume the sending when the session is activated.
-
-## Development for the Session Control End (Media Controller)
-
-### Basic Concepts
-- Remote projection: A local media session is projected to a remote device. The local controller sends commands to control media playback on the remote device.
-- Sending key events: The controller controls media playback by sending key events.
-- Sending control commands: The controller controls media playback by sending control commands.
-- Sending system key events: A system application calls APIs to send system key events to control media playback.
-- Sending system control commands: A system application calls APIs to send system control commands to control media playback.
-
-### Available APIs
-
-The table below lists the APIs available for the development of the session control end. The APIs use either a callback or promise to return the result. The APIs listed below use a callback; they provide the same functions as their counterparts that use a promise. For details, see [AVSession Management](../reference/apis/js-apis-avsession.md).
-
-Table 2 Common APIs for session control end development
-
-| API | Description |
-| ------------------------------------------------------------------------------------------------ | ----------------- |
-| getAllSessionDescriptors(callback: AsyncCallback\<Array\<Readonly\<AVSessionDescriptor\>\>\>): void | Obtains the descriptors of all sessions. |
-| createController(sessionId: string, callback: AsyncCallback\<AVSessionController\>): void | Creates a controller. |
-| sendAVKeyEvent(event: KeyEvent, callback: AsyncCallback\<void\>): void | Sends a key event. |
-| getLaunchAbility(callback: AsyncCallback\<WantAgent\>): void | Obtains the launcher ability. |
-| sendControlCommand(command: AVControlCommand, callback: AsyncCallback\<void\>): void | Sends a control command. |
-| sendSystemAVKeyEvent(event: KeyEvent, callback: AsyncCallback\<void\>): void | Sends a system key event. |
-| sendSystemControlCommand(command: AVControlCommand, callback: AsyncCallback\<void\>): void | Sends a system control command. |
-| castAudio(session: SessionToken \| 'all', audioDevices: Array\<audio.AudioDeviceDescriptor\>, callback: AsyncCallback\<void\>): void | Casts the media session to a remote device.|
-
-### How to Develop
-1. Import the modules.
-
-   ```js
-   import avSession from '@ohos.multimedia.avsession';
-   import {Action, KeyEvent} from '@ohos.multimodalInput.KeyEvent';
-   import wantAgent from '@ohos.wantAgent';
-   import audio from '@ohos.multimedia.audio';
-   ```
-
-2. Obtain the session descriptors and create a controller.
-
-   ```js
-   // Define global variables.
-   let g_controller = new Array<avSession.AVSessionController>();
-   let g_centerSupportCmd:Set<string> = new Set(['play', 'pause', 'playNext', 'playPrevious', 'fastForward', 'rewind', 'seek', 'setSpeed', 'setLoopMode', 'toggleFavorite']);
-   let g_validCmd:Set<string> = new Set();
-
-   // Obtain the session descriptors and create a controller.
-   avSession.getAllSessionDescriptors().then((descriptors) => {
-     descriptors.forEach((descriptor) => {
-       avSession.createController(descriptor.sessionId).then((controller) => {
-         g_controller.push(controller);
-       }).catch((err) => {
-         console.error('createController error');
-       });
-     });
-   }).catch((err) => {
-     console.error('getAllSessionDescriptors error');
-   });
-
-   // Subscribe to the 'sessionCreate' event and create a controller.
-   avSession.on('sessionCreate', (session) => {
-     // After a session is added, you must create a controller.
-     avSession.createController(session.sessionId).then((controller) => {
-       g_controller.push(controller);
-     }).catch((err) => {
-       console.info(`createController : ERROR : ${err.message}`);
-     });
-   });
-   ```
-
-3. Subscribe to the session state and service changes.
-
-   ```js
-   // Subscribe to the 'activeStateChange' event.
-   controller.on('activeStateChange', (isActive) => {
-     if (isActive) {
-       console.log("The widget corresponding to the controller is highlighted.");
-     } else {
-       console.log("The widget corresponding to the controller is invalid.");
-     }
-   });
-
-   // Subscribe to the 'sessionDestroy' event to enable Media Controller to get notified when the session dies.
-   controller.on('sessionDestroy', () => {
-     console.info('on sessionDestroy : SUCCESS ');
-     controller.destroy().then(() => {
-       console.info('destroy : SUCCESS ');
-     }).catch((err) => {
-       console.info(`destroy : ERROR : ${err.message}`);
-     });
-   });
-
-   // Subscribe to the 'sessionDestroy' event to enable the application to get notified when the session dies.
-   avSession.on('sessionDestroy', (session) => {
-     let index = g_controller.findIndex((controller) => {
-       return controller.sessionId == session.sessionId;
-     });
-     if (index != -1) { // findIndex() returns -1 if no matching controller is found.
-       g_controller[index].destroy();
-       g_controller.splice(index, 1);
-     }
-   });
-
-   // Subscribe to the 'topSessionChange' event.
-   avSession.on('topSessionChange', (session) => {
-     let index = g_controller.findIndex((controller) => {
-       return controller.sessionId == session.sessionId;
-     });
-     // Place the session on the top.
-     if (index != 0) {
-       g_controller.sort((a, b) => {
-         return a.sessionId == session.sessionId ? -1 : 0;
-       });
-     }
-   });
-
-   // Subscribe to the 'sessionServiceDie' event.
-   avSession.on('sessionServiceDie', () => {
-     // The server is abnormal, and the application clears resources.
-     console.log("Server exception");
-   })
-   ```
-
-4. Subscribe to media session information changes.
-
-   ```js
-   // Subscribe to metadata changes.
-   let metaFilter = ['assetId', 'title', 'description'];
-   controller.on('metadataChange', metaFilter, (metadata) => {
-     console.info(`on metadataChange assetId : ${metadata.assetId}`);
-   });
-
-   // Subscribe to playback state changes.
-   let playbackFilter = ['state', 'speed', 'loopMode'];
-   controller.on('playbackStateChange', playbackFilter, (playbackState) => {
-     console.info(`on playbackStateChange state : ${playbackState.state}`);
-   });
-
-   // Subscribe to supported command changes.
-   controller.on('validCommandChange', (cmds) => {
-     console.info(`validCommandChange : SUCCESS : size : ${cmds.size}`);
-     console.info(`validCommandChange : SUCCESS : cmds : ${cmds.values()}`);
-     g_validCmd.clear();
-     for (let c of g_centerSupportCmd) {
-       if (cmds.has(c)) {
-         g_validCmd.add(c);
-       }
-     }
-   });
-
-   // Subscribe to output device changes.
-   controller.on('outputDeviceChange', (device) => {
-     console.info(`on outputDeviceChange device isRemote : ${device.isRemote}`);
-   });
-   ```
-
-5. Control the session behavior.
-
-   ```js
-   // When the user touches the play button, the control command 'play' is sent to the session.
-   if (g_validCmd.has('play')) {
-     controller.sendControlCommand({command:'play'}).then(() => {
-       console.info('sendControlCommand successfully');
-     }).catch((err) => {
-       console.info(`sendControlCommand : ERROR : ${err.message}`);
-     });
-   }
-
-   // When the user selects the single loop mode, the corresponding control command is sent to the session.
-   if (g_validCmd.has('setLoopMode')) {
-     controller.sendControlCommand({command: 'setLoopMode', parameter: avSession.LoopMode.LOOP_MODE_SINGLE}).then(() => {
-       console.info('sendControlCommand successfully');
-     }).catch((err) => {
-       console.info(`sendControlCommand : ERROR : ${err.message}`);
-     });
-   }
-
-   // Send a key event.
-   let keyItem = {code: 0x49, pressedTime: 123456789, deviceId: 0};
-   let event = {action: 2, key: keyItem, keys: [keyItem]};
-   controller.sendAVKeyEvent(event).then(() => {
-     console.info('sendAVKeyEvent Successfully');
-   }).catch((err) => {
-     console.info(`sendAVKeyEvent : ERROR : ${err.message}`);
-   });
-
-   // The user touches the blank area on the widget to start the application.
-   controller.getLaunchAbility().then((want) => {
-     console.log("Starting the application in the foreground");
-   }).catch((err) => {
-     console.info(`getLaunchAbility : ERROR : ${err.message}`);
-   });
-
-   // Send a system key event. (Separate variables are used to avoid redeclaring keyItem and event.)
-   let sysKeyItem = {code: 0x49, pressedTime: 123456789, deviceId: 0};
-   let sysEvent = {action: 2, key: sysKeyItem, keys: [sysKeyItem]};
-   avSession.sendSystemAVKeyEvent(sysEvent).then(() => {
-     console.info('sendSystemAVKeyEvent Successfully');
-   }).catch((err) => {
-     console.info(`sendSystemAVKeyEvent : ERROR : ${err.message}`);
-   });
-
-   // Send a system control command to the top session.
-   let avcommand = {command: 'toggleFavorite', parameter: "false"};
-   avSession.sendSystemControlCommand(avcommand).then(() => {
-     console.info('sendSystemControlCommand successfully');
-   }).catch((err) => {
-     console.info(`sendSystemControlCommand : ERROR : ${err.message}`);
-   });
-
-   // Cast the session to another device.
-   let audioManager = audio.getAudioManager();
-   let audioDevices;
-   await audioManager.getDevices(audio.DeviceFlag.OUTPUT_DEVICES_FLAG).then((data) => {
-     audioDevices = data;
-     console.info('Promise returned to indicate that the device list is obtained.');
-   }).catch((err) => {
-     console.info(`getDevices : ERROR : ${err.message}`);
-   });
-
-   avSession.castAudio('all', audioDevices).then(() => {
-     console.info('castAudio : SUCCESS');
-   }).catch((err) => {
-     console.info(`castAudio : ERROR : ${err.message}`);
-   });
-   ```
-
-6. Release resources.
-
-   ```js
-   // Unsubscribe from the events.
-   controller.off('metadataChange');
-   controller.off('playbackStateChange');
-   controller.off('sessionDestroy');
-   controller.off('activeStateChange');
-   controller.off('validCommandChange');
-   controller.off('outputDeviceChange');
-
-   // Destroy the controller.
-   controller.destroy().then(() => {
-     console.info('destroy : SUCCESS ');
-   }).catch((err) => {
-     console.info(`destroy : ERROR : ${err.message}`);
-   });
-   ```
-
-### Verification
-When you touch the play, pause, or next button in Media Controller, the playback state of the application changes accordingly.
-
-### FAQs
-1. Controller Does Not Exist
-- Symptoms
-
-  A control command or an event is sent to the controller that does not exist. The error message "The session controller does not exist" is displayed.
-
-- Possible causes
-
-  The controller has been destroyed.
-
-- Solution
-
-  Query the session record and create the corresponding controller.
-
-2. Remote Session Connection Failure
-- Symptoms
-
-  The communication between the local session and the remote session fails. The error message "The remote session connection failed" is displayed.
-
-- Possible causes
-
-  The communication between devices is interrupted.
-
-- Solution
-
-  Stop sending control commands to the session. Subscribe to output device changes, and resume the sending when the output device is changed.
-
-3. Invalid Session Command
-- Symptoms
-
-  The control command or event sent to the session is not supported. The error message "Invalid session command" is displayed.
-
-- Possible causes
-
-  The session does not support this command.
-
-- Solution
-
-  Stop sending the command or event. Query the commands supported by the session, and send a supported command.
-
-4. Too Many Commands or Events
-- Symptoms
-
-  The session client sends too many messages or commands to the server in a period of time, causing the server to be overloaded. The error message "Command or event overload" is displayed.
-
-- Possible causes
-
-  The server is overloaded with messages or events.
-
-- Solution
-
-  Control the frequency of sending commands or events.
diff --git a/en/application-dev/media/avsession-overview.md b/en/application-dev/media/avsession-overview.md
index c46211765644330ac26c1154f181904c2db4c3d0..766e642eebc2ba861bf6aceca5f9ea702f99d74f 100644
--- a/en/application-dev/media/avsession-overview.md
+++ b/en/application-dev/media/avsession-overview.md
@@ -1,56 +1,50 @@
 # AVSession Overview
 
-> **NOTE**
->
-> All APIs of the **AVSession** module are system APIs and can be called only by system applications.
+The Audio and Video Session (AVSession) service is used to manage the playback behavior of all audio and video applications in the system in a unified manner. For example, it allows only one audio application to be in the playing state at a time.
 
-## Overview
+Audio and video applications access the AVSession service and send application data (for example, the song being played and the playback state) to it. Through a controller, the user can choose another application or device to continue the playback. If an application does not access the AVSession service, its playback will be forcibly interrupted when it switches to the background.
 
-  AVSession, short for audio and video session, is also known as media session.
-  - Application developers can use the APIs provided by the **AVSession** module to connect their audio and video applications to the system's Media Controller.
-  - System developers can use the APIs provided by the **AVSession** module to display media information of system audio and video applications and carry out unified playback control.
+To implement background playback, you must request a continuous task to prevent the task from being suspended. For details, see [Continuous Task Development](../task-management/continuous-task-dev-guide.md).
 
-  You can implement the following features through the **AVSession** module:
+## Basic Concepts
 
-  1. Unified playback control entry
+Be familiar with the following basic concepts before development:
 
-     If there are multiple audio and video applications on the device, users need to switch to and access different applications to control media playback.
With AVSession, a unified playback control entry of the system (such as Media Controller) is used for playback control of these audio and video applications. No more switching is required. +- AVSession - 2. Better background application management + For AVSession, one end is the audio and video applications under control, and the other end is a controller (for example, Media Controller or AI Voice). AVSession provides a channel for information exchange between the application and controller. - When an application running in the background automatically starts audio playback, it is difficult for users to locate the application. With AVSession, users can quickly find the application that plays the audio clip in Media Controller. +- Provider -## Basic Concepts + An audio and video application that accesses the AVSession service. After accessing AVSession, the audio and video application must provide the media information, for example, the name of the item to play and the playback state, to AVSession. Through AVSession, the application also receives control commands from the controller and responds accordingly. -- AVSession +- Controller + + A system application that accesses AVSession to provide global control on audio and video playback behavior. Typical controllers on OpenHarmony devices are Media Controller and AI Voice. The following sections use Media Controller as an example of the controller. After accessing AVSession, the controller obtains the latest media information and sends control commands to the audio and video applications through AVSession. - A channel used for information exchange between applications and Media Controller. For AVSession, one end is the media application under control, and the other end is Media Controller. Through AVSession, an application can transfer the media playback information to Media Controller and receive control commands from Media Controller. - - AVSessionController - Object that controls media sessions and thereby controls the playback behavior of applications. Through AVSessionController, Media Controller can control the playback behavior of applications, obtain playback information, and send control commands. It can also monitor the playback state of applications to ensure synchronization of the media session information. + An object that controls the playback behavior of the provider. It obtains the playback information of the audio and video application and listens for the application playback changes to synchronize the AVSession information between the application and controller. The controller is the holder of an **AVSessionController** object. + +- AVSessionManager + + An object that provides the capability of managing sessions. It can create an **AVSession** object, create an **AVSessionController** object, send control commands, and listen for session state changes. + -- Media Controller - - Holder of AVSessionController. Through AVSessionController, Media Controller sends commands to control media playback of applications. +## AVSession Interaction Process -## Implementation Principle +AVSessions are classified into local AVSessions and distributed AVSessions. -The **AVSession** module provides two classes: **AVSession** and **AVSessionController**. 
+![AVSession Interaction Process](figures/avsession-interaction-process.png) -**Figure 1** AVSession interaction +- Local AVSession -![en-us_image_avsession](figures/en-us_image_avsession.png) + Local AVSession establishes a connection between the provider and controller in the local device, so as to implement unified playback control and media information display for audio and video applications in the system. -- Interaction between the application and Media Controller: First, an audio application creates an **AVSession** object and sets session information, including media metadata, launcher ability, and playback state information. Then, Media Controller creates an **AVSessionController** object to obtain session-related information and send the 'play' command to the audio application. Finally, the audio application responds to the command and updates the playback state. +- Distributed AVSession -- Distributed projection: When a connected device creates a local session, Media Controller or the audio application can select another device to be projected based on the device list, synchronize the local session to the remote device, and generate a controllable remote session. The remote session is controlled by sending control commands to the remote device's application through its AVSessionController. + Distributed AVSession establishes a connection between the provider and controller in the cross-device scenario, so as to implement cross-device playback control and media information display for audio and video applications in the system. For example, you can project the content played on device A to device B and perform playback control on device B. ## Constraints -- The playback information displayed in Media Controller is the media information proactively written by the media application to AVSession. -- Media Controller controls the playback of a media application based on the responses of the media application to control commands. -- AVSession can transmit media playback information and control commands. It does not display information or execute control commands. -- Do not develop Media Controller for common applications. For common audio and video applications running on OpenHarmony, the default control end is Media Controller, which is a system application. You do not need to carry out additional development for Media Controller. -- If you want to develop your own system running OpenHarmony, you can develop your own Media Controller. -- For better background management of audio and video applications, the **AVSession** module enforces background control for applications. Only applications that have accessed AVSession can play audio in the background. Otherwise, the system forcibly pauses the playback when an application switches to the background. +The AVSession service manages the playback behavior of all audio and video applications in the system. To continue the playback after switching to the background, the audio and video applications must access the AVSession service. diff --git a/en/application-dev/media/camera-device-input.md b/en/application-dev/media/camera-device-input.md new file mode 100644 index 0000000000000000000000000000000000000000..3702e16760c002010c50da236d4ef9c2af079e5e --- /dev/null +++ b/en/application-dev/media/camera-device-input.md @@ -0,0 +1,82 @@ +# Device Input Management + +Before developing a camera application, you must create an independent camera object. 
The application invokes and controls the camera object to perform basic operations such as preview, photographing, and video recording.
+
+## How to Develop
+
+Read [Camera](../reference/apis/js-apis-camera.md) for the API reference.
+
+1. Import the camera module, which provides camera-related attributes and methods.
+
+   ```ts
+   import camera from '@ohos.multimedia.camera';
+   ```
+
+2. Call **getCameraManager()** to obtain a **CameraManager** object.
+
+   ```ts
+   let cameraManager;
+   let context: any = getContext(this);
+   cameraManager = camera.getCameraManager(context);
+   ```
+
+   > **NOTE**
+   >
+   > If obtaining the object fails, the camera hardware may be occupied or unusable. If it is occupied, wait until it is released.
+
+3. Call **getSupportedCameras()** in the **CameraManager** class to obtain the list of cameras supported by the current device. The list stores the IDs of all supported cameras. If the list is not empty, each ID in the list can be used to create an independent camera object. Otherwise, no camera is available for the current device and subsequent operations cannot be performed.
+
+   ```ts
+   let cameraArray = cameraManager.getSupportedCameras();
+   if (cameraArray.length <= 0) {
+     console.error("cameraManager.getSupportedCameras error");
+     return;
+   }
+
+   for (let index = 0; index < cameraArray.length; index++) {
+     console.info('cameraId : ' + cameraArray[index].cameraId);              // Obtain the camera ID.
+     console.info('cameraPosition : ' + cameraArray[index].cameraPosition);  // Obtain the camera position.
+     console.info('cameraType : ' + cameraArray[index].cameraType);          // Obtain the camera type.
+     console.info('connectionType : ' + cameraArray[index].connectionType);  // Obtain the camera connection type.
+   }
+   ```
+
+4. Call **getSupportedOutputCapability()** to obtain all output streams supported by the current device, such as preview streams and photo streams. Each supported output stream is described by a **profile** field under **CameraOutputCapability**.
+
+   ```ts
+   // Create a camera input stream.
+   let cameraInput;
+   try {
+     cameraInput = cameraManager.createCameraInput(cameraArray[0]);
+   } catch (error) {
+     console.error('Failed to createCameraInput errorCode = ' + error.code);
+   }
+   // Listen for CameraInput errors.
+   let cameraDevice = cameraArray[0];
+   cameraInput.on('error', cameraDevice, (error) => {
+     console.error(`Camera input error code: ${error.code}`);
+   })
+   // Open the camera.
+   await cameraInput.open();
+   // Obtain the output stream capabilities supported by the camera.
+   let cameraOutputCapability = cameraManager.getSupportedOutputCapability(cameraArray[0]);
+   if (!cameraOutputCapability) {
+     console.error("cameraManager.getSupportedOutputCapability error");
+     return;
+   }
+   console.info("outputCapability: " + JSON.stringify(cameraOutputCapability));
+   ```
+
+
+## Status Listening
+
+During camera application development, you can listen for the camera status, including the appearance of a new camera, removal of a camera, and availability of a camera. The camera ID and camera status are carried in the callback function. When a new camera appears, it can be added to the application's supported camera list.
+
+Register the 'cameraStatus' event and return the listening result through a callback, which carries the **CameraStatusInfo** parameter. For details about the parameter, see [CameraStatusInfo](../reference/apis/js-apis-camera.md#camerastatusinfo).
+
+```ts
+cameraManager.on('cameraStatus', (cameraStatusInfo) => {
+  console.info(`camera: ${cameraStatusInfo.camera.cameraId}`);
+  console.info(`status: ${cameraStatusInfo.status}`);
+})
+```
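+
+In practice, an application usually picks a specific camera from the list rather than always using the first entry. The snippet below is a minimal sketch of such a selection (the helper name **getBackCamera** is illustrative, not part of the camera API): it prefers the rear camera by checking **cameraPosition** and falls back to the first camera in the list.
+
+```ts
+function getBackCamera(cameraManager) {
+  let cameraArray = cameraManager.getSupportedCameras();
+  for (let index = 0; index < cameraArray.length; index++) {
+    // Prefer the rear camera if one is present.
+    if (cameraArray[index].cameraPosition === camera.CameraPosition.CAMERA_POSITION_BACK) {
+      return cameraArray[index];
+    }
+  }
+  // Fall back to the first supported camera (undefined if the list is empty).
+  return cameraArray[0];
+}
+```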
diff --git a/en/application-dev/media/camera-metadata.md b/en/application-dev/media/camera-metadata.md
new file mode 100644
index 0000000000000000000000000000000000000000..8fdeff1df08f624374f2a2a5cee32b99b2c41e03
--- /dev/null
+++ b/en/application-dev/media/camera-metadata.md
@@ -0,0 +1,66 @@
+# Camera Metadata
+
+Metadata is the description and context of the image information returned by the camera. It provides detailed data for the image information, for example, the coordinates of a viewfinder frame for identifying a portrait in a photo or a video.
+
+Metadata uses a tag (key) to find the corresponding data during the transfer of parameters and configurations, reducing memory copy operations.
+
+## How to Develop
+
+Read [Camera](../reference/apis/js-apis-camera.md) for the API reference.
+
+1. Obtain the metadata types supported by the current device from **supportedMetadataObjectTypes** in **CameraOutputCapability**, and then use **createMetadataOutput()** to create a metadata output stream.
+
+   ```ts
+   let metadataObjectTypes = cameraOutputCapability.supportedMetadataObjectTypes;
+   let metadataOutput;
+   try {
+     metadataOutput = cameraManager.createMetadataOutput(metadataObjectTypes);
+   } catch (error) {
+     // If the operation fails, error.code is returned and processed.
+     console.error(error.code);
+   }
+   ```
+
+2. Call **start()** to start outputting metadata. If the call fails, an error code is returned. For details, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
+
+   ```ts
+   metadataOutput.start().then(() => {
+     console.info('Callback returned with metadataOutput started.');
+   }).catch((err) => {
+     console.error('Failed to metadataOutput start ' + err.code);
+   });
+   ```
+
+3. Call **stop()** to stop outputting metadata. If the call fails, an error code is returned. For details, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
+
+   ```ts
+   metadataOutput.stop().then(() => {
+     console.info('Callback returned with metadataOutput stopped.');
+   }).catch((err) => {
+     console.error('Failed to metadataOutput stop ' + err.code);
+   });
+   ```
+
+## Status Listening
+
+During camera application development, you can listen for the status of metadata objects and the metadata output stream.
+
+- Register the 'metadataObjectsAvailable' event to listen for metadata objects that are available. When a valid metadata object is detected, the callback function returns the metadata. This event can be registered when a **MetadataOutput** object is created.
+
+  ```ts
+  metadataOutput.on('metadataObjectsAvailable', (metadataObjectArr) => {
+    console.info(`metadata output metadataObjectsAvailable`);
+  })
+  ```
+
+  > **NOTE**
+  >
+  > Currently, only **FACE_DETECTION** is available for the metadata type. The metadata object is the rectangle of the recognized face, including the x-axis coordinate and y-axis coordinate of the upper left corner of the rectangle as well as the width and height of the rectangle.
+
+- Register the 'error' event to listen for metadata stream errors. The callback function returns an error code when an API is incorrectly used. For details about the error code types, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
+
+  ```ts
+  metadataOutput.on('error', (metadataOutputError) => {
+    console.error(`Metadata output error code: ${metadataOutputError.code}`);
+  })
+  ```
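+
+Building on the note above, the 'metadataObjectsAvailable' callback receives an array of metadata objects, and each face-detection object carries the rectangle of the recognized face. The snippet below is a minimal sketch of reading that rectangle, assuming the **boundingBox** field (a rectangle with **topLeftX**, **topLeftY**, **width**, and **height**) described for metadata objects in the API reference:
+
+```ts
+metadataOutput.on('metadataObjectsAvailable', (metadataObjectArr) => {
+  for (let metadataObject of metadataObjectArr) {
+    let rect = metadataObject.boundingBox; // Rectangle of the recognized face.
+    console.info(`face rect - topLeftX: ${rect.topLeftX}, topLeftY: ${rect.topLeftY}, ` +
+      `width: ${rect.width}, height: ${rect.height}`);
+  }
+})
+```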
diff --git a/en/application-dev/media/camera-overview.md b/en/application-dev/media/camera-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..03445ee6979c28fb4084a2f3c8186d77f14e5b89
--- /dev/null
+++ b/en/application-dev/media/camera-overview.md
@@ -0,0 +1,27 @@
+# Camera Overview
+
+With the APIs provided by the camera module of the multimedia subsystem, you can develop a camera application. The application accesses and operates the camera hardware to implement basic operations, such as preview, photographing, and video recording. It can also perform more operations, for example, controlling the flash and exposure time, focusing, and adjusting the focal length.
+
+## Development Model
+
+The camera application invokes the camera hardware to collect and process image and video data, and outputs images and videos. It supports devices with multiple lenses (such as a wide-angle lens, telephoto lens, and ToF lens) and adapts to various service scenarios with different requirements on the resolution, format, and effect.
+
+The figure below illustrates the working process of the camera module. The working process can be summarized into three parts: input device management, session management, and output management.
+
+- During input device management, the camera application invokes the camera hardware to collect data and uses the data as an input stream.
+
+- During session management, you can configure an input stream to determine the camera to be used. You can also set parameters, such as the flash, exposure time, focus, and focal length, to implement different shooting effects in various service scenarios. The application can switch between sessions to meet service requirements in different scenarios.
+
+- During output management, you can configure an output stream, which can be a preview stream, photo stream, or video stream.
+
+**Figure 1** Camera working process
+![Camera Workflow](figures/camera-workflow.png)
+
+For better application development, you are also advised to understand the camera development model.
+
+**Figure 2** Camera development model
+![Camera Development Model](figures/camera-development-model.png)
+
+The camera application controls the camera hardware to implement basic operations such as image display (preview), photo saving (photographing), and video recording. During the implementation, the camera service controls the camera hardware to collect and output data, and transmits the data to a specific module for processing through a BufferQueue at the camera device hardware interface (HDI) layer. You can ignore the BufferQueue during application development. It is used to send the data processed by the bottom layer to the upper layer for image display.
+
+For example, in a video recording scenario, the recording service creates a video surface and provides it to the camera service for data transmission. The camera service controls the camera device to collect video data and generate a video stream. After processing the collected data at the HDI layer, the camera service transmits the video stream to the recording service through the surface. The recording service processes the video stream and saves it as a video file. Now video recording is complete.
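+
+As a quick orientation before the task-specific guides, the snippet below sketches how the three parts of the working process map to API calls in a preview scenario. It is a minimal sketch: it assumes **previewSurfaceId** has already been obtained from an **XComponent**, picks the first available camera and profile, and omits the error handling shown in the later development guides.
+
+```ts
+import camera from '@ohos.multimedia.camera';
+
+async function startPreview(context, previewSurfaceId) {
+  // Input device management: select a camera and open it.
+  let cameraManager = camera.getCameraManager(context);
+  let cameraDevice = cameraManager.getSupportedCameras()[0];
+  let cameraInput = cameraManager.createCameraInput(cameraDevice);
+  await cameraInput.open();
+  // Output management: create a preview output stream from a supported profile.
+  let capability = cameraManager.getSupportedOutputCapability(cameraDevice);
+  let previewOutput = cameraManager.createPreviewOutput(capability.previewProfiles[0], previewSurfaceId);
+  // Session management: bind the input and output streams, then start the session.
+  let captureSession = cameraManager.createCaptureSession();
+  captureSession.beginConfig();
+  captureSession.addInput(cameraInput);
+  captureSession.addOutput(previewOutput);
+  await captureSession.commitConfig();
+  await captureSession.start();
+}
+```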
diff --git a/en/application-dev/media/camera-preparation.md b/en/application-dev/media/camera-preparation.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb504af9a69f65473f27de59a45a17891357be7f
--- /dev/null
+++ b/en/application-dev/media/camera-preparation.md
@@ -0,0 +1,25 @@
+# Camera Development Preparations
+
+The main process of camera application development includes development preparations, device input management, session management, preview, photographing, and video recording.
+
+Before developing a camera application, you must request camera-related permissions (as described in the table below) to ensure that the application has the permission to access the camera hardware and other services. Before requesting the permission, ensure that the [basic principles for permission management](../security/accesstoken-overview.md#basic-principles-for-permission-management) are met.
+
+
+| Permission| Description| Authorization Mode|
+| -------- | -------- | -------- |
+| ohos.permission.CAMERA | Allows an application to use the camera to take photos and record videos.| user_grant |
+| ohos.permission.MICROPHONE | Allows an application to access the microphone. This permission is required only if the application is used to record audio.| user_grant |
+| ohos.permission.WRITE_MEDIA | Allows an application to read media files from and write media files into the user's external storage. This permission is optional.| user_grant |
+| ohos.permission.READ_MEDIA | Allows an application to read media files from the user's external storage. This permission is optional.| user_grant |
+| ohos.permission.MEDIA_LOCATION | Allows an application to access geographical locations in the user's media file. This permission is optional.| user_grant |
+
+
+After configuring the permissions in the **module.json5** file, the application must call [abilityAccessCtrl.requestPermissionsFromUser](../reference/apis/js-apis-abilityAccessCtrl.md#requestpermissionsfromuser9) to check whether the required permissions are granted. If not, request the permissions from the user by displaying a dialog box.
+
+
+For details about how to request and verify the permissions, see [Permission Application Guide](../security/accesstoken-guidelines.md).
+
+
+> **NOTE**
+>
+> Even if the user has granted a permission, the application must check for the permission before calling an API protected by the permission. It should not persist the permission granted status, because the user can revoke the permission through the system application **Settings**.
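+
+The snippet below is a minimal sketch of this check-and-request flow using **abilityAccessCtrl**. The permission list shown is illustrative; request only the permissions your application actually needs.
+
+```ts
+import abilityAccessCtrl from '@ohos.abilityAccessCtrl';
+
+async function requestCameraPermissions(context) {
+  let atManager = abilityAccessCtrl.createAtManager();
+  try {
+    // Display a dialog box for any listed permission that has not been granted yet.
+    let result = await atManager.requestPermissionsFromUser(context,
+      ['ohos.permission.CAMERA', 'ohos.permission.MICROPHONE']);
+    for (let i = 0; i < result.authResults.length; i++) {
+      // An authorization result of 0 indicates that the permission is granted.
+      console.info(`${result.permissions[i]} auth result: ${result.authResults[i]}`);
+    }
+  } catch (error) {
+    console.error('requestPermissionsFromUser failed, error code: ' + error.code);
+  }
+}
+```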
diff --git a/en/application-dev/media/camera-preview.md b/en/application-dev/media/camera-preview.md
new file mode 100644
index 0000000000000000000000000000000000000000..e65f5dac8c96737b81b20703ce6ffa6fe7daa54b
--- /dev/null
+++ b/en/application-dev/media/camera-preview.md
@@ -0,0 +1,87 @@
+# Camera Preview
+
+Preview is the image you see after you start the camera application but before you take photos or record videos.
+
+## How to Develop
+
+Read [Camera](../reference/apis/js-apis-camera.md) for the API reference.
+
+1. Create a surface.
+
+   The **XComponent**, whose capabilities are provided by the UI framework, supplies the surface for the preview stream. For details, see [XComponent](../reference/arkui-ts/ts-basic-components-xcomponent.md).
+
+   ```ts
+   // Create an XComponentController object.
+   mXComponentController: XComponentController = new XComponentController;
+   build() {
+     Flex() {
+       // Create an XComponent.
+       XComponent({
+         id: '',
+         type: 'surface',
+         libraryname: '',
+         controller: this.mXComponentController
+       })
+         .onLoad(() => {
+           // Set the surface width and height (1920 x 1080). For details about how to set the preview size, see the preview resolutions supported by the current device, which are obtained from previewProfilesArray.
+           this.mXComponentController.setXComponentSurfaceSize({surfaceWidth:1920,surfaceHeight:1080});
+           // Obtain the surface ID.
+           globalThis.surfaceId = this.mXComponentController.getXComponentSurfaceId();
+         })
+         .width('1920px')
+         .height('1080px')
+     }
+   }
+   ```
+
+2. Obtain the preview capabilities supported by the current device, in the format of a **previewProfilesArray** array, from **previewProfiles** in the **CameraOutputCapability** class. Then call **createPreviewOutput()** to create a preview output stream, with the first parameter set to the first item in the **previewProfilesArray** array and the second parameter set to the surface ID obtained in step 1.
+
+   ```ts
+   let previewProfilesArray = cameraOutputCapability.previewProfiles;
+   let previewOutput;
+   try {
+     previewOutput = cameraManager.createPreviewOutput(previewProfilesArray[0], surfaceId);
+   } catch (error) {
+     console.error("Failed to create the PreviewOutput instance." + error);
+   }
+   ```
+
+3. Call **start()** to start outputting the preview stream. If the call fails, an error code is returned. For details, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
+
+   ```ts
+   previewOutput.start().then(() => {
+     console.info('Callback returned with previewOutput started.');
+   }).catch((err) => {
+     console.error('Failed to previewOutput start ' + err.code);
+   });
+   ```
+
+
+## Status Listening
+
+During camera application development, you can listen for the preview output stream status, including preview stream start, preview stream end, and preview stream output errors.
+
+- Register the 'frameStart' event to listen for preview start events. This event can be registered when a **PreviewOutput** object is created and is triggered when the bottom layer starts exposure for the first time. Receiving this callback indicates that the preview stream has started.
+
+  ```ts
+  previewOutput.on('frameStart', () => {
+    console.info('Preview frame started');
+  })
+  ```
+
+- Register the 'frameEnd' event to listen for preview end events. This event can be registered when a **PreviewOutput** object is created and is triggered when the last frame of the preview ends. Receiving this callback indicates that the preview stream has ended.
+
+  ```ts
+  previewOutput.on('frameEnd', () => {
+    console.info('Preview frame ended');
+  })
+  ```
+
+- Register the 'error' event to listen for preview output errors. The callback function returns an error code when an API is incorrectly used. For details about the error code types, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
+
+  ```ts
+  previewOutput.on('error', (previewOutputError) => {
+    console.error(`Preview output error code: ${previewOutputError.code}`);
+  })
+  ```
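+
+When the preview is no longer needed, for example, when the camera page is hidden, stop and release the preview output stream. The snippet below is a minimal sketch of this teardown:
+
+```ts
+async function stopPreview(previewOutput) {
+  // Stop outputting the preview stream.
+  await previewOutput.stop();
+  // Release the output stream once it will no longer be used.
+  await previewOutput.release();
+}
+```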
diff --git a/en/application-dev/media/camera-recording-case.md b/en/application-dev/media/camera-recording-case.md
new file mode 100644
index 0000000000000000000000000000000000000000..4d284f7e675fe0693240bbb678391147926652e7
--- /dev/null
+++ b/en/application-dev/media/camera-recording-case.md
@@ -0,0 +1,247 @@
+# Video Recording Sample
+
+## Development Process
+
+After obtaining the output stream capabilities supported by the camera, create a video stream. The development process is as follows:
+
+![Recording Development Process](figures/recording-development-process.png)
+
+
+## Sample Code
+
+```ts
+import camera from '@ohos.multimedia.camera'
+import media from '@ohos.multimedia.media'
+
+// Create a CameraManager instance.
+context: any = getContext(this)
+let cameraManager = camera.getCameraManager(this.context)
+if (!cameraManager) {
+  console.error("camera.getCameraManager error")
+  return;
+}
+
+// Listen for camera status changes.
+cameraManager.on('cameraStatus', (cameraStatusInfo) => {
+  console.log(`camera : ${cameraStatusInfo.camera.cameraId}`);
+  console.log(`status: ${cameraStatusInfo.status}`);
+})
+
+// Obtain the output stream capabilities supported by the camera.
+let cameraOutputCap = cameraManager.getSupportedOutputCapability(cameraArray[0]);
+if (!cameraOutputCap) {
+  console.error("cameraManager.getSupportedOutputCapability error")
+  return;
+}
+console.log("outputCapability: " + JSON.stringify(cameraOutputCap));
+
+let previewProfilesArray = cameraOutputCap.previewProfiles;
+if (!previewProfilesArray) {
+  console.error("createOutput previewProfilesArray == null || undefined")
+}
+
+let photoProfilesArray = cameraOutputCap.photoProfiles;
+if (!photoProfilesArray) {
+  console.error("createOutput photoProfilesArray == null || undefined")
+}
+
+let videoProfilesArray = cameraOutputCap.videoProfiles;
+if (!videoProfilesArray) {
+  console.error("createOutput videoProfilesArray == null || undefined")
+}
+
+let metadataObjectTypesArray = cameraOutputCap.supportedMetadataObjectTypes;
+if (!metadataObjectTypesArray) {
+  console.error("createOutput metadataObjectTypesArray == null || undefined")
+}
+
+// Configure the parameters based on those supported by the hardware device.
+let AVRecorderProfile = {
+  audioBitrate : 48000,
+  audioChannels : 2,
+  audioCodec : media.CodecMimeType.AUDIO_AAC,
+  audioSampleRate : 48000,
+  fileFormat : media.ContainerFormatType.CFT_MPEG_4,
+  videoBitrate : 2000000,
+  videoCodec : media.CodecMimeType.VIDEO_MPEG4,
+  videoFrameWidth : 640,
+  videoFrameHeight : 480,
+  videoFrameRate : 30
+}
+let AVRecorderConfig = {
+  audioSourceType : media.AudioSourceType.AUDIO_SOURCE_TYPE_MIC,
+  videoSourceType : media.VideoSourceType.VIDEO_SOURCE_TYPE_SURFACE_YUV,
+  profile : AVRecorderProfile,
+  url : 'fd://', // Before passing in a file descriptor to this parameter, the file must be created by the caller and granted with the read and write permissions. Example value: fd://45.
+  rotation: 0, // The value can be 0, 90, 180, or 270. If any other value is used, prepare() reports an error.
+  location : { latitude : 30, longitude : 130 }
+}
+
+let avRecorder
+media.createAVRecorder((error, recorder) => {
+  if (recorder != null) {
+    avRecorder = recorder;
+    console.log('createAVRecorder success');
+  } else {
+    console.log(`createAVRecorder fail, error:${error}`);
+  }
+});
+
+avRecorder.prepare(AVRecorderConfig, (err) => {
+  if (err == null) {
+    console.log('prepare success');
+  } else {
+    console.log('prepare failed and error is ' + err.message);
+  }
+})
+
+let videoSurfaceId = null; // The surfaceID is passed in to the camera API to create a VideoOutput instance.
+avRecorder.getInputSurface((err, surfaceId) => {
+  if (err == null) {
+    console.log('getInputSurface success');
+    videoSurfaceId = surfaceId;
+  } else {
+    console.log('getInputSurface failed and error is ' + err.message);
+  }
+});
+
+// Create a VideoOutput instance.
+let videoOutput
+try {
+  videoOutput = cameraManager.createVideoOutput(videoProfilesArray[0], videoSurfaceId)
+} catch (error) {
+  console.error('Failed to create the videoOutput instance. errorCode = ' + error.code);
+}
+
+// Listen for video output errors.
+videoOutput.on('error', (error) => {
+  console.log(`Video output error code: ${error.code}`);
+})
+
+// Create a session.
+let captureSession
+try {
+  captureSession = cameraManager.createCaptureSession()
+} catch (error) {
+  console.error('Failed to create the CaptureSession instance. errorCode = ' + error.code);
+}
+
+// Listen for session errors.
+captureSession.on('error', (error) => {
+  console.log(`Capture session error code: ${error.code}`);
+})
+
+// Start configuration for the session.
+try { + captureSession.beginConfig() +} catch (error) { + console.error('Failed to beginConfig. errorCode = ' + error.code); +} + +// Obtain the camera list. +let cameraArray = cameraManager.getSupportedCameras(); +if (cameraArray.length <= 0) { + console.error("cameraManager.getSupportedCameras error") + return; +} + +// Create a camera input stream. +let cameraInput +try { + cameraInput = cameraManager.createCameraInput(cameraArray[0]); +} catch (error) { + console.error('Failed to createCameraInput errorCode = ' + error.code); +} + +// Listen for camera input errors. +let cameraDevice = cameraArray[0]; +cameraInput.on('error', cameraDevice, (error) => { + console.log(`Camera input error code: ${error.code}`); +}) + +// Open the camera. +await cameraInput.open(); + +// Add the camera input stream to the session. +try { + captureSession.addInput(cameraInput) +} catch (error) { + console.error('Failed to addInput. errorCode = ' + error.code); +} + +// Create a preview output stream. For details about the surfaceId parameter, see the XComponent. The preview stream is the surface provided by the XComponent. +let previewOutput +try { + previewOutput = cameraManager.createPreviewOutput(previewProfilesArray[0], surfaceId) +} catch (error) { + console.error("Failed to create the PreviewOutput instance.") +} + +// Add the preview input stream to the session. +try { + captureSession.addOutput(previewOutput) +} catch (error) { + console.error('Failed to addOutput(previewOutput). errorCode = ' + error.code); +} + +// Add a video output stream to the session. +try { + captureSession.addOutput(videoOutput) +} catch (error) { + console.error('Failed to addOutput(videoOutput). errorCode = ' + error.code); +} + +// Commit the session configuration. +await captureSession.commitConfig() + +// Start the session. +await captureSession.start().then(() => { + console.log('Promise returned to indicate the session start success.'); +}) + +// Start the video output stream. +videoOutput.start(async (err) => { + if (err) { + console.error('Failed to start the video output ${err.message}'); + return; + } + console.log('Callback invoked to indicate the video output start success.'); +}); + +// Start video recording. +avRecorder.start().then(() => { + console.log('videoRecorder start success'); +}) + +// Stop the video output stream. +videoOutput.stop((err) => { + if (err) { + console.error('Failed to stop the video output ${err.message}'); + return; + } + console.log('Callback invoked to indicate the video output stop success.'); +}); + +// Stop video recording. +avRecorder.stop().then(() => { + console.log('stop success'); +}) + +// Stop the session. +captureSession.stop() + +// Release the camera input stream. +cameraInput.close() + +// Release the preview output stream. +previewOutput.release() + +// Release the video output stream. +videoOutput.release() + +// Release the session. +captureSession.release() + +// Set the session to null. +captureSession = null +``` diff --git a/en/application-dev/media/camera-recording.md b/en/application-dev/media/camera-recording.md new file mode 100644 index 0000000000000000000000000000000000000000..421ff990bf45b372dd39cd3346e29b636f292762 --- /dev/null +++ b/en/application-dev/media/camera-recording.md @@ -0,0 +1,155 @@ +# Video Recording + +Video recording is also an important function of the camera application. Video recording is the process of cyclic capturing of frames. 
To ensure smooth videos, you can follow step 4 in [Camera Photographing](camera-shooting.md) to set the resolution, flash, focal length, photo quality, and rotation angle.
+
+## How to Develop
+
+Read [Camera](../reference/apis/js-apis-camera.md) for the API reference.
+
+1. Import the media module. The [APIs](../reference/apis/js-apis-media.md) provided by this module are used to obtain the surface ID and create a video output stream.
+
+   ```ts
+   import media from '@ohos.multimedia.media';
+   ```
+
+2. Create a surface.
+
+   Call **createAVRecorder()** of the media module to create an **AVRecorder** instance, and call **getInputSurface()** of the instance to obtain the surface ID, which is associated with the video output stream to process the data output by the stream.
+
+   ```ts
+   let AVRecorder;
+   media.createAVRecorder((error, recorder) => {
+     if (recorder != null) {
+       AVRecorder = recorder;
+       console.info('createAVRecorder success');
+     } else {
+       console.info(`createAVRecorder fail, error:${error}`);
+     }
+   });
+   // For details about AVRecorderConfig, see the next step.
+   AVRecorder.prepare(AVRecorderConfig, (err) => {
+     if (err == null) {
+       console.log('prepare success');
+     } else {
+       console.log('prepare failed and error is ' + err.message);
+     }
+   })
+
+   let videoSurfaceId = null;
+   AVRecorder.getInputSurface().then((surfaceId) => {
+     console.info('getInputSurface success');
+     videoSurfaceId = surfaceId;
+   }).catch((err) => {
+     console.info('getInputSurface failed and catch error is ' + err.message);
+   });
+   ```
+
+3. Create a video output stream.
+
+   Obtain the video output streams supported by the current device from **videoProfiles** in the **CameraOutputCapability** class. Then, define video recording parameters and use **createVideoOutput()** to create a video output stream.
+
+   ```ts
+   let videoProfilesArray = cameraOutputCapability.videoProfiles;
+   if (!videoProfilesArray) {
+     console.error("createOutput videoProfilesArray == null || undefined");
+   }
+
+   // Define video recording parameters.
+   let videoConfig = {
+     videoSourceType: media.VideoSourceType.VIDEO_SOURCE_TYPE_SURFACE_YUV,
+     profile: {
+       fileFormat: media.ContainerFormatType.CFT_MPEG_4, // Video file encapsulation format. Only MP4 is supported.
+       videoBitrate: 100000, // Video bit rate.
+       videoCodec: media.CodecMimeType.VIDEO_MPEG4, // Video file encoding format. Both MPEG-4 and AVC are supported.
+       videoFrameWidth: 640, // Video frame width.
+       videoFrameHeight: 480, // Video frame height.
+       videoFrameRate: 30 // Video frame rate.
+     },
+     url: 'fd://35',
+     rotation: 0
+   }
+   // Create an AVRecorder instance.
+   let avRecorder;
+   media.createAVRecorder((error, recorder) => {
+     if (recorder != null) {
+       avRecorder = recorder;
+       console.info('createAVRecorder success');
+     } else {
+       console.info(`createAVRecorder fail, error:${error}`);
+     }
+   });
+   // Set video recording parameters.
+   avRecorder.prepare(videoConfig);
+   // Create a VideoOutput instance.
+   let videoOutput;
+   try {
+     videoOutput = cameraManager.createVideoOutput(videoProfilesArray[0], videoSurfaceId);
+   } catch (error) {
+     console.error('Failed to create the videoOutput instance. errorCode = ' + error.code);
+   }
+   ```
+
+4. Start video recording.
+
+   Call **start()** of the **VideoOutput** instance to start the video output stream, and then call **start()** of the **AVRecorder** instance to start recording.
+
+   ```ts
+   videoOutput.start(async (err) => {
+     if (err) {
+       console.error(`Failed to start the video output ${err.message}`);
+       return;
+     }
+     console.info('Callback invoked to indicate the video output start success.');
+   });
+
+   avRecorder.start().then(() => {
+     console.info('avRecorder start success');
+   })
+   ```
+
+5. Stop video recording.
+
+   Call **stop()** of the **AVRecorder** instance to stop recording, and then call **stop()** of the **VideoOutput** instance to stop the video output stream.
+
+   ```ts
+   avRecorder.stop().then(() => {
+     console.info('stop success');
+   })
+
+   videoOutput.stop((err) => {
+     if (err) {
+       console.error(`Failed to stop the video output ${err.message}`);
+       return;
+     }
+     console.info('Callback invoked to indicate the video output stop success.');
+   });
+   ```
+
+
+## Status Listening
+
+During camera application development, you can listen for the status of the video output stream, including recording start, recording end, and recording stream output errors.
+
+- Register the 'frameStart' event to listen for recording start events. This event can be registered when a **VideoOutput** object is created and is triggered when the bottom layer starts exposure for recording for the first time. Receiving this callback indicates that video recording has started.
+
+  ```ts
+  videoOutput.on('frameStart', () => {
+    console.info('Video frame started');
+  })
+  ```
+
+- Register the 'frameEnd' event to listen for recording end events. This event can be registered when a **VideoOutput** object is created and is triggered when the last frame of recording ends. Receiving this callback indicates that video recording has ended.
+
+  ```ts
+  videoOutput.on('frameEnd', () => {
+    console.info('Video frame ended');
+  })
+  ```
+
+- Register the 'error' event to listen for video output errors. The callback function returns an error code when an API is incorrectly used. For details about the error code types, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode).
+
+  ```ts
+  videoOutput.on('error', (error) => {
+    console.error(`Video output error code: ${error.code}`);
+  })
+  ```
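+
+Recording can also be paused and resumed without tearing down the session, using **pause()** and **resume()** of the **AVRecorder** instance. The snippet below is a minimal sketch of one workable ordering, not the only possibility: pause the recorder before stopping the output stream, and restart the output stream before resuming the recorder.
+
+```ts
+// Pause recording: pause the recorder first, then stop pushing frames.
+async function pauseRecording(avRecorder, videoOutput) {
+  await avRecorder.pause();
+  await videoOutput.stop();
+}
+
+// Resume recording: restart the output stream first, then the recorder.
+async function resumeRecording(avRecorder, videoOutput) {
+  await videoOutput.start();
+  await avRecorder.resume();
+}
+```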
diff --git a/en/application-dev/media/camera-session-management.md b/en/application-dev/media/camera-session-management.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d0d2fcfe20428d33d72569cbf2212b830ad42e2
--- /dev/null
+++ b/en/application-dev/media/camera-session-management.md
@@ -0,0 +1,86 @@
+# Camera Session Management
+
+Before using the camera application for preview, photographing, video recording, and metadata, you must create a camera session.
+
+You can implement the following functions in the session:
+
+- Configure the camera input and output streams. This is mandatory for photographing.
+  Configuring an input stream is to add a device input, which means that the user selects a camera for photographing. Configuring an output stream is to select a data output mode. For example, to implement photographing, you must configure both the preview stream and photo stream as the output stream. The data of the preview stream is displayed on the XComponent, and that of the photo stream is saved to the Gallery application through the **ImageReceiver** API.
+
+- Perform more operations on the camera hardware. For example, add the flash and adjust the focal length. For details about the supported configurations and APIs, see [Camera API Reference](../reference/apis/js-apis-camera.md).
+
+- Control session switching.
+  The application can switch the camera mode by removing and adding output streams. For example, to switch from photographing to video recording, the application must remove the photo output stream and add the video output stream.
+
+After the session configuration is complete, the application must commit the configuration and start the session before using the camera functionalities.
+
+## How to Develop
+
+1. Call **createCaptureSession()** in the **CameraManager** class to create a session.
+
+   ```ts
+   let captureSession;
+   try {
+     captureSession = cameraManager.createCaptureSession();
+   } catch (error) {
+     console.error('Failed to create the CaptureSession instance. errorCode = ' + error.code);
+   }
+   ```
+
+2. Call **beginConfig()** in the **CaptureSession** class to start configuration for the session.
+
+   ```ts
+   try {
+     captureSession.beginConfig();
+   } catch (error) {
+     console.error('Failed to beginConfig. errorCode = ' + error.code);
+   }
+   ```
+
+3. Configure the session. You can call **addInput()** and **addOutput()** in the **CaptureSession** class to add the input and output streams to the session, respectively. The code snippet below adds the preview stream **previewOutput** and the photo stream **photoOutput**, implementing the photographing and preview mode.
+
+   After the configuration, call **commitConfig()** and **start()** in the **CaptureSession** class in sequence to commit the configuration and start the session.
+
+   ```ts
+   try {
+     captureSession.addInput(cameraInput);
+   } catch (error) {
+     console.error('Failed to addInput. errorCode = ' + error.code);
+   }
+   try {
+     captureSession.addOutput(previewOutput);
+   } catch (error) {
+     console.error('Failed to addOutput(previewOutput). errorCode = ' + error.code);
+   }
+   try {
+     captureSession.addOutput(photoOutput);
+   } catch (error) {
+     console.error('Failed to addOutput(photoOutput). errorCode = ' + error.code);
+   }
+   await captureSession.commitConfig();
+   await captureSession.start().then(() => {
+     console.info('Promise returned to indicate the session start success.');
+   })
+   ```
+
+4. Control the session. You can call **stop()** in the **CaptureSession** class to stop the session, and call **removeOutput()** and **addOutput()** in this class to switch to another session. The code snippet below removes the photo stream **photoOutput** and adds the video stream **videoOutput**, switching from photographing to recording.
+
+   ```ts
+   await captureSession.stop();
+   try {
+     captureSession.beginConfig();
+   } catch (error) {
+     console.error('Failed to beginConfig. errorCode = ' + error.code);
+   }
+   // Remove the photo output stream from the session.
+   try {
+     captureSession.removeOutput(photoOutput);
+   } catch (error) {
+     console.error('Failed to removeOutput(photoOutput). errorCode = ' + error.code);
+   }
+   // Add the video output stream to the session.
+   try {
+     captureSession.addOutput(videoOutput);
+   } catch (error) {
+     console.error('Failed to addOutput(videoOutput). errorCode = ' + error.code);
+   }
+   // Commit the new configuration and restart the session before recording.
+   await captureSession.commitConfig();
+   await captureSession.start();
+   ```
diff --git a/en/application-dev/media/camera-shooting-case.md b/en/application-dev/media/camera-shooting-case.md
new file mode 100644
index 0000000000000000000000000000000000000000..da2588b10b844fd2a9432da909d1d387b8193d9f
--- /dev/null
+++ b/en/application-dev/media/camera-shooting-case.md
@@ -0,0 +1,239 @@
+# Camera Photographing Sample
+
+## Development Process
+
+After obtaining the output stream capabilities supported by the camera, create a photo stream.
The development process is as follows: + +![Photographing Development Process](figures/photographing-development-process.png) + +## Sample Code + +```ts +import camera from '@ohos.multimedia.camera' +import image from '@ohos.multimedia.image' +import media from '@ohos.multimedia.media' + +// Create a CameraManager instance. +context: any = getContext(this) +let cameraManager = camera.getCameraManager(this.context) +if (!cameraManager) { + console.error("camera.getCameraManager error") + return; +} +// Listen for camera status changes. +cameraManager.on('cameraStatus', (cameraStatusInfo) => { + console.info(`camera : ${cameraStatusInfo.camera.cameraId}`); + console.info(`status: ${cameraStatusInfo.status}`); +}) + +// Obtain the camera list. +let cameraArray = cameraManager.getSupportedCameras(); +if (cameraArray.length <= 0) { + console.error("cameraManager.getSupportedCameras error") + return; +} + +for (let index = 0; index < cameraArray.length; index++) { + console.info('cameraId : ' + cameraArray[index].cameraId); // Obtain the camera ID. + console.info('cameraPosition : ' + cameraArray[index].cameraPosition); // Obtain the camera position. + console.info('cameraType : ' + cameraArray[index].cameraType); // Obtain the camera type. + console.info('connectionType : ' + cameraArray[index].connectionType); // Obtain the camera connection type. +} + +// Create a camera input stream. +let cameraInput +try { + cameraInput = cameraManager.createCameraInput(cameraArray[0]); +} catch (error) { + console.error('Failed to createCameraInput errorCode = ' + error.code); +} + +// Listen for camera input errors. +let cameraDevice = cameraArray[0]; +cameraInput.on('error', cameraDevice, (error) => { + console.info(`Camera input error code: ${error.code}`); +}) + +// Open the camera. +await cameraInput.open(); + +// Obtain the output stream capabilities supported by the camera. +let cameraOutputCap = cameraManager.getSupportedOutputCapability(cameraArray[0]); +if (!cameraOutputCap) { + console.error("cameraManager.getSupportedOutputCapability error") + return; +} +console.info("outputCapability: " + JSON.stringify(cameraOutputCap)); + +let previewProfilesArray = cameraOutputCap.previewProfiles; +if (!previewProfilesArray) { + console.error("createOutput previewProfilesArray == null || undefined") +} + +let photoProfilesArray = cameraOutputCap.photoProfiles; +if (!photoProfilesArray) { + console.error("createOutput photoProfilesArray == null || undefined") +} + +// Create a preview output stream. For details about the surfaceId parameter, see the XComponent. The preview stream is the surface provided by the XComponent. +let previewOutput +try { + previewOutput = cameraManager.createPreviewOutput(previewProfilesArray[0], surfaceId) +} catch (error) { + console.error("Failed to create the PreviewOutput instance.") +} + +// Listen for preview output errors. +previewOutput.on('error', (error) => { + console.info(`Preview output error code: ${error.code}`); +}) + +// Create an ImageReceiver instance and set photographing parameters. Wherein, the resolution must be one of the photographing resolutions supported by the current device, which are obtained by photoProfilesArray. +let imageReceiver = await image.createImageReceiver(1920, 1080, 4, 8) +// Obtain the surface ID for displaying the photos. +let photoSurfaceId = await imageReceiver.getReceivingSurfaceId() +// Create a photo output stream. 
+let photoOutput +try { + photoOutput = cameraManager.createPhotoOutput(photoProfilesArray[0], photoSurfaceId) +} catch (error) { + console.error('Failed to createPhotoOutput errorCode = ' + error.code); +} +// Create a session. +let captureSession +try { + captureSession = cameraManager.createCaptureSession() +} catch (error) { + console.error('Failed to create the CaptureSession instance. errorCode = ' + error.code); +} + +// Listen for session errors. +captureSession.on('error', (error) => { + console.info(`Capture session error code: ${error.code}`); +}) + +// Start configuration for the session. +try { + captureSession.beginConfig() +} catch (error) { + console.error('Failed to beginConfig. errorCode = ' + error.code); +} + +// Add the camera input stream to the session. +try { + captureSession.addInput(cameraInput) +} catch (error) { + console.error('Failed to addInput. errorCode = ' + error.code); +} + +// Add the preview output stream to the session. +try { + captureSession.addOutput(previewOutput) +} catch (error) { + console.error('Failed to addOutput(previewOutput). errorCode = ' + error.code); +} + +// Add the photo output stream to the session. +try { + captureSession.addOutput(photoOutput) +} catch (error) { + console.error('Failed to addOutput(photoOutput). errorCode = ' + error.code); +} + +// Commit the session configuration. +await captureSession.commitConfig() + +// Start the session. +await captureSession.start().then(() => { + console.info('Promise returned to indicate the session start success.'); +}) +// Check whether the camera has flash. +let flashStatus +try { + flashStatus = captureSession.hasFlash() +} catch (error) { + console.error('Failed to hasFlash. errorCode = ' + error.code); +} +console.info('Promise returned with the flash light support status:' + flashStatus); + +if (flashStatus) { + // Check whether the auto flash mode is supported. + let flashModeStatus + try { + let status = captureSession.isFlashModeSupported(camera.FlashMode.FLASH_MODE_AUTO) + flashModeStatus = status + } catch (error) { + console.error('Failed to check whether the flash mode is supported. errorCode = ' + error.code); + } + if(flashModeStatus) { + // Set the flash mode to auto. + try { + captureSession.setFlashMode(camera.FlashMode.FLASH_MODE_AUTO) + } catch (error) { + console.error('Failed to set the flash mode. errorCode = ' + error.code); + } + } +} + +// Check whether the continuous auto focus is supported. +let focusModeStatus +try { + let status = captureSession.isFocusModeSupported(camera.FocusMode.FOCUS_MODE_CONTINUOUS_AUTO) + focusModeStatus = status +} catch (error) { + console.error('Failed to check whether the focus mode is supported. errorCode = ' + error.code); +} + +if (focusModeStatus) { + // Set the focus mode to continuous auto focus. + try { + captureSession.setFocusMode(camera.FocusMode.FOCUS_MODE_CONTINUOUS_AUTO) + } catch (error) { + console.error('Failed to set the focus mode. errorCode = ' + error.code); + } +} + +// Obtain the zoom ratio range supported by the camera. +let zoomRatioRange +try { + zoomRatioRange = captureSession.getZoomRatioRange() +} catch (error) { + console.error('Failed to get the zoom ratio range. errorCode = ' + error.code); +} + +// Set a zoom ratio. +try { + captureSession.setZoomRatio(zoomRatioRange[0]) +} catch (error) { + console.error('Failed to set the zoom ratio value. errorCode = ' + error.code); +} +let settings = { + quality: camera.QualityLevel.QUALITY_LEVEL_HIGH, // Set the photo quality to high. 
+ rotation: camera.ImageRotation.ROTATION_0 // Set the rotation angle of the photo to 0. +} +// Use the current photographing settings to take photos. +photoOutput.capture(settings, async (err) => { + if (err) { + console.error('Failed to capture the photo ${err.message}'); + return; + } + console.info('Callback invoked to indicate the photo capture request success.'); +}); +// Stop the session. +captureSession.stop() + +// Release the camera input stream. +cameraInput.close() + +// Release the preview output stream. +previewOutput.release() + +// Release the photo output stream. +photoOutput.release() + +// Release the session. +captureSession.release() + +// Set the session to null. +captureSession = null +``` diff --git a/en/application-dev/media/camera-shooting.md b/en/application-dev/media/camera-shooting.md new file mode 100644 index 0000000000000000000000000000000000000000..9026267ebc0a6950ced6b5092ce88e8ed31d2e24 --- /dev/null +++ b/en/application-dev/media/camera-shooting.md @@ -0,0 +1,159 @@ +# Camera Photographing + +Photographing is an important function of the camera application. Based on the complex logic of the camera hardware, the camera module provides APIs for you to set information such as resolution, flash, focal length, photo quality, and rotation angle. + +## How to Develop + +Read [Camera](../reference/apis/js-apis-camera.md) for the API reference. + +1. Import the image module. The APIs provided by this module are used to obtain the surface ID and create a photo output stream. + + ```ts + import image from '@ohos.multimedia.image'; + ``` + +2. Obtain the surface ID. + + Call **createImageReceiver()** of the image module to create an **ImageReceiver** instance, and use **getReceivingSurfaceId()** of the instance to obtain the surface ID, which is associated with the photo output stream to process the data output by the stream. + + ```ts + function getImageReceiverSurfaceId() { + let receiver = image.createImageReceiver(640, 480, 4, 8); + console.info('before ImageReceiver check'); + if (receiver !== undefined) { + console.info('ImageReceiver is ok'); + let photoSurfaceId = receiver.getReceivingSurfaceId(); + console.info('ImageReceived id: ' + JSON.stringify(photoSurfaceId)); + } else { + console.info('ImageReceiver is not ok'); + } + } + ``` + +3. Create a photo output stream. + + Obtain the photo output streams supported by the current device from **photoProfiles** in **CameraOutputCapability**, and then call **createPhotoOutput()** to pass in a supported output stream and the surface ID obtained in step 1 to create a photo output stream. + + ```ts + let photoProfilesArray = cameraOutputCapability.photoProfiles; + if (!photoProfilesArray) { + console.error("createOutput photoProfilesArray == null || undefined"); + } + let photoOutput; + try { + photoOutput = cameraManager.createPhotoOutput(photoProfilesArray[0], photoSurfaceId); + } catch (error) { + console.error('Failed to createPhotoOutput errorCode = ' + error.code); + } + ``` + +4. Set camera parameters. + + You can set camera parameters to adjust photographing functions, including the flash, zoom ratio, and focal length. + + ```ts + // Check whether the camera has flash. + let flashStatus; + try { + flashStatus = captureSession.hasFlash(); + } catch (error) { + console.error('Failed to hasFlash. errorCode = ' + error.code); + } + console.info('Promise returned with the flash light support status:' + flashStatus); + if (flashStatus) { + // Check whether the auto flash mode is supported. 
+ let flashModeStatus; + try { + let status = captureSession.isFlashModeSupported(camera.FlashMode.FLASH_MODE_AUTO); + flashModeStatus = status; + } catch (error) { + console.error('Failed to check whether the flash mode is supported. errorCode = ' + error.code); + } + if(flashModeStatus) { + // Set the flash mode to auto. + try { + captureSession.setFlashMode(camera.FlashMode.FLASH_MODE_AUTO); + } catch (error) { + console.error('Failed to set the flash mode. errorCode = ' + error.code); + } + } + } + // Check whether the continuous auto focus is supported. + let focusModeStatus; + try { + let status = captureSession.isFocusModeSupported(camera.FocusMode.FOCUS_MODE_CONTINUOUS_AUTO); + focusModeStatus = status; + } catch (error) { + console.error('Failed to check whether the focus mode is supported. errorCode = ' + error.code); + } + if (focusModeStatus) { + // Set the focus mode to continuous auto focus. + try { + captureSession.setFocusMode(camera.FocusMode.FOCUS_MODE_CONTINUOUS_AUTO); + } catch (error) { + console.error('Failed to set the focus mode. errorCode = ' + error.code); + } + } + // Obtain the zoom ratio range supported by the camera. + let zoomRatioRange; + try { + zoomRatioRange = captureSession.getZoomRatioRange(); + } catch (error) { + console.error('Failed to get the zoom ratio range. errorCode = ' + error.code); + } + // Set a zoom ratio. + try { + captureSession.setZoomRatio(zoomRatioRange[0]); + } catch (error) { + console.error('Failed to set the zoom ratio value. errorCode = ' + error.code); + } + ``` + +5. Trigger photographing. + + Call **capture()** in the **PhotoOutput** class to capture a photo. In this API, the first parameter specifies the settings (for example, photo quality and rotation angle) for photographing, and the second parameter is a callback function. + + ```ts + let settings = { + quality: camera.QualityLevel.QUALITY_LEVEL_HIGH, // Set the photo quality to high. + rotation: camera.ImageRotation.ROTATION_0, // Set the rotation angle of the photo to 0. + location: captureLocation, // Set the geolocation information of the photo. + mirror: false // Disable mirroring (disabled by default). + }; + photoOutput.capture(settings, async (err) => { + if (err) { + console.error('Failed to capture the photo ${err.message}'); + return; + } + console.info('Callback invoked to indicate the photo capture request success.'); + }); + ``` + +## Status Listening + +During camera application development, you can listen for the status of the photo output stream, including the start of the photo stream, the start and end of the photo frame, and the errors of the photo output stream. + +- Register the 'captureStart' event to listen for photographing start events. This event can be registered when a **PhotoOutput** object is created and is triggered when the bottom layer starts exposure for photographing for the first time. The capture ID is returned. + + ```ts + photoOutput.on('captureStart', (captureId) => { + console.info(`photo capture stated, captureId : ${captureId}`); + }) + ``` + +- Register the 'captureEnd' event to listen for photographing end events. This event can be registered when a **PhotoOutput** object is created and is triggered when the photographing is complete. [CaptureEndInfo](../reference/apis/js-apis-camera.md#captureendinfo) is returned. 
+ + ```ts + photoOutput.on('captureEnd', (captureEndInfo) => { + console.info(`photo capture end, captureId : ${captureEndInfo.captureId}`); + console.info(`frameCount : ${captureEndInfo.frameCount}`); + }) + ``` + +- Register the 'error' event to listen for photo output errors. The callback function returns an error code when an API is incorrectly used. For details about the error code types, see [Camera Error Codes](../reference/apis/js-apis-camera.md#cameraerrorcode). + + ```ts + photoOutput.on('error', (error) => { + console.info(`Photo output error code: ${error.code}`); + }) + ``` diff --git a/en/application-dev/media/camera.md b/en/application-dev/media/camera.md deleted file mode 100644 index 0622db9c3ce6d962001b47ca6d2e6d1bc2aaff7c..0000000000000000000000000000000000000000 --- a/en/application-dev/media/camera.md +++ /dev/null @@ -1,511 +0,0 @@ -# Camera Development - -## When to Use - -With the APIs provided by the **Camera** module, you can access and operate camera devices and develop new functions. Common operations include preview, photographing, and video recording. You can also implement flash control, exposure time control, focus mode control, zoom control, and much more. - -Before calling camera APIs, be familiar with the following concepts: - -- **Static camera capabilities**: A series of parameters used to describe inherent capabilities of a camera, such as orientation and supported resolution. -- **Physical camera**: An independent camera device. The physical camera ID is a string that uniquely identifies a physical camera. -- **Asynchronous operation**: A non-blocking operation that allows other operations to execute before it completes. To prevent the UI thread from being blocked, some **Camera** calls are asynchronous. Each asynchronous API provides the callback and promise functions. - -## How to Develop - -### Available APIs - -For details about the APIs, see [Camera Management](../reference/apis/js-apis-camera.md). - -### Full-Process Scenario - -The full process includes applying for permissions, creating an instance, setting parameters, managing sessions, taking photos, recording videos, and releasing resources. - -#### Applying for Permissions - -You must apply for the permissions for your application to access the camera device and other functions. The following table lists camera-related permissions. - -| Permission| Attribute Value | -| -------- | ------------------------------ | -| Camera| ohos.permission.CAMERA | -| Call recording| ohos.permission.MICROPHONE | -| Storage| ohos.permission.WRITE_MEDIA | -| Read| ohos.permission.READ_MEDIA | -| Location| ohos.permission.MEDIA_LOCATION | - -The code snippet is as follows: - -```typescript -const PERMISSIONS: Array = [ - 'ohos.permission.CAMERA', - 'ohos.permission.MICROPHONE', - 'ohos.permission.MEDIA_LOCATION', - 'ohos.permission.READ_MEDIA', - 'ohos.permission.WRITE_MEDIA' -] - -function applyPermission() { - console.info('[permission] get permission'); - globalThis.abilityContext.requestPermissionFromUser(PERMISSIONS) - } -``` - -#### Creating an Instance - -You must create an independent **CameraManager** instance before performing camera operations. If this operation fails, the camera may be occupied or unusable. If the camera is occupied, wait until it is released. You can call **getSupportedCameras()** to obtain the list of cameras supported by the current device. The list stores all camera IDs of the current device. Each of these IDs can be used to create an independent **CameraManager** instance. 
If the list is empty, no camera is available for the current device and subsequent operations cannot be performed. The camera has preview, shooting, video recording, and metadata output streams. You can use **getSupportedOutputCapability()** to obtain the output stream capabilities of the camera and configure them in the **profile** field in **CameraOutputCapability**. The procedure for creating a **CameraManager** instance is as follows: - -```typescript -import camera from '@ohos.multimedia.camera' -import image from '@ohos.multimedia.image' -import media from '@ohos.multimedia.media' - -// Create a CameraManager instance. -context: any = getContext(this) -let cameraManager = camera.getCameraManager(this.context) -if (!cameraManager) { - console.error("camera.getCameraManager error") - return; -} -// Listen for camera state changes. -cameraManager.on('cameraStatus', (cameraStatusInfo) => { - console.log(`camera : ${cameraStatusInfo.camera.cameraId}`); - console.log(`status: ${cameraStatusInfo.status}`); -}) - -// Obtain the camera list. -let cameraArray = cameraManager.getSupportedCameras(); -if (cameraArray.length <= 0) { - console.error("cameraManager.getSupportedCameras error") - return; -} - -for (let index = 0; index < cameraArray.length; index++) { - console.log('cameraId : ' + cameraArray[index].cameraId); // Obtain the camera ID. - console.log('cameraPosition : ' + cameraArray[index].cameraPosition); // Obtain the camera position. - console.log('cameraType : ' + cameraArray[index].cameraType); // Obtain the camera type. - console.log('connectionType : ' + cameraArray[index].connectionType); // Obtain the camera connection type. -} - -// Create a camera input stream. -let cameraInput -try { - cameraInput = cameraManager.createCameraInput(cameraArray[0]); -} catch () { - console.error('Failed to createCameraInput errorCode = ' + error.code); -} - -// Listen for CameraInput errors. -let cameraDevice = cameraArray[0]; -cameraInput.on('error', cameraDevice, (error) => { - console.log(`Camera input error code: ${error.code}`); -}) - -// Open the camera. -await cameraInput.open(); - -// Obtain the output stream capabilities supported by the camera. -let cameraOutputCap = cameraManager.getSupportedOutputCapability(cameraArray[0]); -if (!cameraOutputCap) { - console.error("cameraManager.getSupportedOutputCapability error") - return; -} -console.info("outputCapability: " + JSON.stringify(cameraOutputCap)); - -let previewProfilesArray = cameraOutputCap.previewProfiles; -if (!previewProfilesArray) { - console.error("createOutput previewProfilesArray == null || undefined") -} - -let photoProfilesArray = cameraOutputCap.photoProfiles; -if (!photoProfilesArray) { - console.error("createOutput photoProfilesArray == null || undefined") -} - -let videoProfilesArray = cameraOutputCap.videoProfiles; -if (!videoProfilesArray) { - console.error("createOutput videoProfilesArray == null || undefined") -} - -let metadataObjectTypesArray = cameraOutputCap.supportedMetadataObjectTypes; -if (!metadataObjectTypesArray) { - console.error("createOutput metadataObjectTypesArray == null || undefined") -} - -// Create a preview stream. For details about the surfaceId parameter, see the XComponent section. The preview stream is the surface provided by the XComponent. -let previewOutput -try { - previewOutput = cameraManager.createPreviewOutput(previewProfilesArray[0], surfaceId) -} catch (error) { - console.error("Failed to create the PreviewOutput instance.") -} - -// Listen for PreviewOutput errors. 
-previewOutput.on('error', (error) => {
-  console.log(`Preview output error code: ${error.code}`);
-})
-
-// Create an ImageReceiver instance and set photo parameters. The resolution must be one of the photographing resolutions supported by the current device, which can be obtained from photoProfilesArray.
-let imageReceiver = await image.createImageReceiver(1920, 1080, 4, 8)
-// Obtain the surface ID for displaying the photos.
-let photoSurfaceId = await imageReceiver.getReceivingSurfaceId()
-// Create a photographing output stream.
-let photoOutput
-try {
-  photoOutput = cameraManager.createPhotoOutput(photoProfilesArray[0], photoSurfaceId)
-} catch (error) {
-  console.error('Failed to createPhotoOutput errorCode = ' + error.code);
-}
-
-// Define video recording parameters.
-let videoConfig = {
-  audioSourceType: 1,
-  videoSourceType: 1,
-  profile: {
-    audioBitrate: 48000,
-    audioChannels: 2,
-    audioCodec: 'audio/mp4v-es',
-    audioSampleRate: 48000,
-    durationTime: 1000,
-    fileFormat: 'mp4',
-    videoBitrate: 48000,
-    videoCodec: 'video/mp4v-es',
-    videoFrameWidth: 640,
-    videoFrameHeight: 480,
-    videoFrameRate: 30
-  },
-  url: 'file:///data/media/01.mp4',
-  orientationHint: 0,
-  maxSize: 100,
-  maxDuration: 500,
-  rotation: 0
-}
-
-// Create a video recording output stream.
-let videoRecorder = await media.createVideoRecorder()
-// Set video recording parameters.
-await videoRecorder.prepare(videoConfig)
-// Obtain the surface ID for video recording.
-let videoSurfaceId = await videoRecorder.getInputSurface()
-
-// Create a VideoOutput instance.
-let videoOutput
-try {
-  videoOutput = cameraManager.createVideoOutput(videoProfilesArray[0], videoSurfaceId)
-} catch (error) {
-  console.error('Failed to create the videoOutput instance. errorCode = ' + error.code);
-}
-
-// Listen for VideoOutput errors.
-videoOutput.on('error', (error) => {
-  console.log(`Video output error code: ${error.code}`);
-})
-```
-Surfaces must be created in advance for the preview, shooting, and video recording streams. The preview stream uses the surface provided by the **XComponent**, the shooting stream uses the surface provided by **ImageReceiver**, and the video recording stream uses the surface provided by **VideoRecorder**.
-
-**XComponent**
-
-```typescript
-mXComponentController: XComponentController = new XComponentController  // Create an XComponentController.
-
-build() {
-  Flex() {
-    XComponent({  // Create an XComponent.
-      id: '',
-      type: 'surface',
-      libraryname: '',
-      controller: this.mXComponentController
-    })
-      .onLoad(() => {  // Set the onLoad callback.
-        // Set the surface width and height (1920 x 1080). For details about how to set the preview size, see the preview resolutions supported by the current device, which can be obtained from previewProfilesArray.
-        this.mXComponentController.setXComponentSurfaceSize({ surfaceWidth: 1920, surfaceHeight: 1080 })
-        // Obtain the surface ID.
-        globalThis.surfaceId = this.mXComponentController.getXComponentSurfaceId()
-      })
-      .width('1920px')  // Set the width of the XComponent.
-      .height('1080px')  // Set the height of the XComponent.
-  }
-}
-```
-
-**ImageReceiver**
-
-```typescript
-async function getImageReceiverSurfaceId() {
-  let receiver = image.createImageReceiver(640, 480, 4, 8)
-  console.log('before ImageReceiver check')
-  if (receiver !== undefined) {
-    console.log('ImageReceiver is ok')
-    let surfaceId1 = await receiver.getReceivingSurfaceId()
-    console.log('ImageReceived id: ' + JSON.stringify(surfaceId1))
-  } else {
-    console.log('ImageReceiver is not ok')
-  }
-}
-```
-
-**VideoRecorder**
-
-```typescript
-async function getVideoRecorderSurface() {
-  await getFd('CameraManager.mp4');
-  mVideoConfig.url = mFdPath;
-  media.createVideoRecorder((err, recorder) => {
-    console.info('Entering create video receiver')
-    mVideoRecorder = recorder
-    console.info('videoRecorder is :' + JSON.stringify(mVideoRecorder))
-    console.info('videoRecorder.prepare called.')
-    mVideoRecorder.prepare(mVideoConfig, (err) => {
-      console.info('videoRecorder.prepare success.')
-      mVideoRecorder.getInputSurface((err, id) => {
-        console.info('getInputSurface called')
-        mVideoSurface = id
-        console.info('getInputSurface surfaceId: ' + JSON.stringify(mVideoSurface))
-      })
-    })
-  })
-}
-```
-
-#### Managing Sessions
-
-##### Creating a Session
-
-```typescript
-// Create a session.
-let captureSession
-try {
-  captureSession = cameraManager.createCaptureSession()
-} catch (error) {
-  console.error('Failed to create the CaptureSession instance. errorCode = ' + error.code);
-}
-
-// Listen for session errors.
-captureSession.on('error', (error) => {
-  console.log(`Capture session error code: ${error.code}`);
-})
-
-// Start configuration for the session.
-try {
-  captureSession.beginConfig()
-} catch (error) {
-  console.error('Failed to beginConfig. errorCode = ' + error.code);
-}
-
-// Add the camera input stream to the session.
-try {
-  captureSession.addInput(cameraInput)
-} catch (error) {
-  console.error('Failed to addInput. errorCode = ' + error.code);
-}
-
-// Add the preview output stream to the session.
-try {
-  captureSession.addOutput(previewOutput)
-} catch (error) {
-  console.error('Failed to addOutput(previewOutput). errorCode = ' + error.code);
-}
-
-// Add the photographing output stream to the session.
-try {
-  captureSession.addOutput(photoOutput)
-} catch (error) {
-  console.error('Failed to addOutput(photoOutput). errorCode = ' + error.code);
-}
-
-// Commit the session configuration.
-await captureSession.commitConfig()
-
-// Start the session.
-await captureSession.start().then(() => {
-  console.log('Promise returned to indicate the session start success.');
-})
-```
-
-##### Switching a Session
-
-```typescript
-// Stop the session.
-await captureSession.stop()
-
-// Start configuration for the session.
-try {
-  captureSession.beginConfig()
-} catch (error) {
-  console.error('Failed to beginConfig. errorCode = ' + error.code);
-}
-
-// Remove the photographing output stream from the session.
-try {
-  captureSession.removeOutput(photoOutput)
-} catch (error) {
-  console.error('Failed to removeOutput(photoOutput). errorCode = ' + error.code);
-}
-
-// Add a video recording output stream to the session.
-try {
-  captureSession.addOutput(videoOutput)
-} catch (error) {
-  console.error('Failed to addOutput(videoOutput). errorCode = ' + error.code);
-}
-
-// Commit the session configuration.
-await captureSession.commitConfig()
-
-// Start the session.
-await captureSession.start().then(() => {
-  console.log('Promise returned to indicate the session start success.');
-})
-```
-
-#### Setting Parameters
-
-```typescript
-// Check whether the camera has flash.
-let flashStatus
-try {
-  flashStatus = captureSession.hasFlash()
-} catch (error) {
-  console.error('Failed to hasFlash. errorCode = ' + error.code);
-}
-console.log('Promise returned with the flash light support status: ' + flashStatus);
-
-if (flashStatus) {
-  // Check whether the auto flash mode is supported.
-  let flashModeStatus
-  try {
-    let status = captureSession.isFlashModeSupported(camera.FlashMode.FLASH_MODE_AUTO)
-    flashModeStatus = status
-  } catch (error) {
-    console.error('Failed to check whether the flash mode is supported. errorCode = ' + error.code);
-  }
-  if (flashModeStatus) {
-    // Set the flash mode to auto.
-    try {
-      captureSession.setFlashMode(camera.FlashMode.FLASH_MODE_AUTO)
-    } catch (error) {
-      console.error('Failed to set the flash mode. errorCode = ' + error.code);
-    }
-  }
-}
-
-// Check whether continuous auto focus is supported.
-let focusModeStatus
-try {
-  let status = captureSession.isFocusModeSupported(camera.FocusMode.FOCUS_MODE_CONTINUOUS_AUTO)
-  focusModeStatus = status
-} catch (error) {
-  console.error('Failed to check whether the focus mode is supported. errorCode = ' + error.code);
-}
-
-if (focusModeStatus) {
-  // Set the focus mode to continuous auto focus.
-  try {
-    captureSession.setFocusMode(camera.FocusMode.FOCUS_MODE_CONTINUOUS_AUTO)
-  } catch (error) {
-    console.error('Failed to set the focus mode. errorCode = ' + error.code);
-  }
-}
-
-// Obtain the zoom ratio range supported by the camera.
-let zoomRatioRange
-try {
-  zoomRatioRange = captureSession.getZoomRatioRange()
-} catch (error) {
-  console.error('Failed to get the zoom ratio range. errorCode = ' + error.code);
-}
-
-// Set a zoom ratio.
-try {
-  captureSession.setZoomRatio(zoomRatioRange[0])
-} catch (error) {
-  console.error('Failed to set the zoom ratio value. errorCode = ' + error.code);
-}
-```
-
-#### Taking Photos
-
-```typescript
-let settings = {
-  quality: camera.QualityLevel.QUALITY_LEVEL_HIGH,  // Set the image quality to high.
-  rotation: camera.ImageRotation.ROTATION_0  // Set the image rotation angle to 0.
-}
-// Use the current photographing settings to take photos.
-photoOutput.capture(settings, async (err) => {
-  if (err) {
-    console.error(`Failed to capture the photo ${err.message}`);
-    return;
-  }
-  console.log('Callback invoked to indicate the photo capture request success.');
-});
-```
-
-#### Recording Videos
-
-```typescript
-// Start the video recording output stream.
-videoOutput.start(async (err) => {
-  if (err) {
-    console.error(`Failed to start the video output ${err.message}`);
-    return;
-  }
-  console.log('Callback invoked to indicate the video output start success.');
-});
-
-// Start video recording.
-videoRecorder.start().then(() => {
-  console.info('videoRecorder start success');
-})
-
-// Stop video recording.
-videoRecorder.stop().then(() => {
-  console.info('stop success');
-})
-
-// Stop the video recording output stream.
-videoOutput.stop((err) => {
-  if (err) {
-    console.error(`Failed to stop the video output ${err.message}`);
-    return;
-  }
-  console.log('Callback invoked to indicate the video output stop success.');
-});
-```
-
-For details about the APIs used for saving photos, see [Image Processing](image.md#using-imagereceiver).
-
-#### Releasing Resources
-
-```typescript
-// Stop the session.
-captureSession.stop()
-
-// Release the camera input stream.
-cameraInput.close()
-
-// Release the preview output stream.
-previewOutput.release()
-
-// Release the photographing output stream.
-photoOutput.release()
-
-// Release the video recording output stream.
-videoOutput.release()
-
-// Release the session.
-captureSession.release()
-
-// Set the session to null.
-captureSession = null
-```
-
-## Process Flowchart
-
-The following figure shows the process of using the camera.
-![camera_framework process](figures/camera_framework_process.png)
diff --git a/en/application-dev/media/distributed-audio-playback.md b/en/application-dev/media/distributed-audio-playback.md
new file mode 100644
index 0000000000000000000000000000000000000000..c56420de740e545168d009b5c743f2790146c475
--- /dev/null
+++ b/en/application-dev/media/distributed-audio-playback.md
@@ -0,0 +1,101 @@
+# Distributed Audio Playback (for System Applications Only)
+
+Distributed audio playback enables an application to continue audio playback on another device in the same network.
+
+You can use distributed audio playback to transfer all audio streams or the specified audio stream being played on the current device to a remote device.
+
+## How to Develop
+
+Before continuing audio playback on another device in the same network, you must obtain the device list on the network and listen for device connection state changes. For details, see [Audio Output Device Management](audio-output-device-management.md).
+
+When obtaining the device list on the network, you can specify **DeviceFlag** to filter out the required devices.
+
+| Name| Description|
+| -------- | -------- |
+| NONE_DEVICES_FLAG<sup>9+</sup> | None. This is a system API.|
+| OUTPUT_DEVICES_FLAG | Local output device.|
+| INPUT_DEVICES_FLAG | Local input device.|
+| ALL_DEVICES_FLAG | Local input and output device.|
+| DISTRIBUTED_OUTPUT_DEVICES_FLAG<sup>9+</sup> | Remote output device. This is a system API.|
+| DISTRIBUTED_INPUT_DEVICES_FLAG<sup>9+</sup> | Remote input device. This is a system API.|
+| ALL_DISTRIBUTED_DEVICES_FLAG<sup>9+</sup> | Remote input and output device. This is a system API.|
+
+For details, see [AudioRoutingManager](../reference/apis/js-apis-audio.md#audioroutingmanager9).
+
+### Continuing the Playing of All Audio Streams
+
+1. [Obtain the output device information](audio-output-device-management.md#obtaining-output-device-information).
+
+2. Create an **AudioDeviceDescriptor** instance to describe an audio output device.
+
+3. Call **selectOutputDevice** to select a remote device, on which all the audio streams will continue playing.
+
+```ts
+let outputAudioDeviceDescriptor = [{
+  deviceRole: audio.DeviceRole.OUTPUT_DEVICE,
+  deviceType: audio.DeviceType.SPEAKER,
+  id: 1,
+  name: "",
+  address: "",
+  sampleRates: [44100],
+  channelCounts: [2],
+  channelMasks: [0],
+  networkId: audio.LOCAL_NETWORK_ID,
+  interruptGroupId: 1,
+  volumeGroupId: 1,
+}];
+
+async function selectOutputDevice() {
+  audioRoutingManager.selectOutputDevice(outputAudioDeviceDescriptor, (err) => {
+    if (err) {
+      console.error(`Invoke selectOutputDevice failed, code is ${err.code}, message is ${err.message}`);
+    } else {
+      console.info('Invoke selectOutputDevice succeeded.');
+    }
+  });
+}
+```
+
+### Continuing the Playing of the Specified Audio Stream
+
+1. [Obtain the output device information](audio-output-device-management.md#obtaining-output-device-information).
+
+2. Create an **AudioRendererFilter** instance, in which **uid** specifies an application and **rendererId** specifies an audio stream.
+
+3. Create an **AudioDeviceDescriptor** instance to describe an audio output device.
+
+4. Call **selectOutputDeviceByFilter** to select a remote device, on which the specified audio stream will continue playing.
+
+```ts
+let outputAudioRendererFilter = {
+  uid: 20010041,
+  rendererInfo: {
+    content: audio.ContentType.CONTENT_TYPE_MUSIC,
+    usage: audio.StreamUsage.STREAM_USAGE_MEDIA,
+    rendererFlags: 0
+  },
+  rendererId: 0
+};
+
+let outputAudioDeviceDescriptor = [{
+  deviceRole: audio.DeviceRole.OUTPUT_DEVICE,
+  deviceType: audio.DeviceType.SPEAKER,
+  id: 1,
+  name: "",
+  address: "",
+  sampleRates: [44100],
+  channelCounts: [2],
+  channelMasks: [0],
+  networkId: audio.LOCAL_NETWORK_ID,
+  interruptGroupId: 1,
+  volumeGroupId: 1,
+}];
+
+async function selectOutputDeviceByFilter() {
+  audioRoutingManager.selectOutputDeviceByFilter(outputAudioRendererFilter, outputAudioDeviceDescriptor, (err) => {
+    if (err) {
+      console.error(`Invoke selectOutputDeviceByFilter failed, code is ${err.code}, message is ${err.message}`);
+    } else {
+      console.info('Invoke selectOutputDeviceByFilter succeeded.');
+    }
+  });
+}
+```
diff --git a/en/application-dev/media/distributed-avsession-overview.md b/en/application-dev/media/distributed-avsession-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..ff293ed7332d0a9c5e66632f91c943af42d28030
--- /dev/null
+++ b/en/application-dev/media/distributed-avsession-overview.md
@@ -0,0 +1,54 @@
+# Distributed AVSession Overview
+
+With distributed AVSession, OpenHarmony allows users to project locally played media to a distributed device for a better playback effect. For example, users can project audio played on a tablet to a smart speaker.
+
+After the user initiates a projection, the media information is synchronized to the distributed device in real time, and the user can control the playback (for example, previous, next, play, and pause) on the distributed device. From the perspective of the user, playback control on the distributed device is the same as that on the local device.
+
+
+## Interaction Process
+
+After the local device is paired with a distributed device, the controller on the local device projects media to the distributed device through AVSessionManager, thereby implementing a distributed AVSession. The interaction process is shown below.
+
+![Distributed AVSession Interaction Process](figures/distributed-avsession-interaction-process.png)
+
+The AVSession service on the distributed device automatically creates an **AVSession** object for information synchronization with the local device. The information to synchronize includes the session information, control commands, and events.
+
+## Distributed AVSession Process
+
+After the user triggers a projection, the remote device automatically creates an **AVSession** object to associate with the one on the local device. The detailed process is as follows:
+
+1. After receiving an audio device switching command, the AVSession service on the local device synchronizes the session information to the distributed device.
+
+2. The controller (for example, Media Controller) on the distributed device detects the new **AVSession** object and creates an **AVSessionController** object for it.
+
+3. Through the **AVSessionController** object, the controller on the distributed device sends control commands to the **AVSession** object on the local device.
+
+4. Upon the receipt of a control command, the **AVSession** object on the local device triggers a callback to the local audio application.
+
+5. The **AVSession** object on the local device synchronizes the new session information to the controller on the distributed device in real time.
+
+6. When the remote device is disconnected, the audio stream is switched back to the local device and the playback is paused. (The audio module completes the switchback, and the AVSession service instructs the application to pause the playback.)
+
+## Distributed AVSession Scenarios
+
+There are two scenarios for projection implemented using the distributed AVSession:
+
+- System projection: The controller (for example, Media Controller) initiates a projection.
+
+  This type of projection takes effect for all applications. After a system projection, all audio on the local device is played from the distributed device by default.
+
+- Application projection: An audio and video application integrates the projection component to initiate a projection. (This scenario is not supported yet.)
+
+  This type of projection takes effect for a single application. After an application projection, audio of the application on the local device is played from the distributed device, and audio of other applications is still played from the local device.
+
+Projection preemption is supported. If application A initiates a projection to a remote device and then application B initiates a projection to the same device, audio of application B is played on the remote device.
+
+## Relationship Between Distributed AVSession and Distributed Audio Playback
+
+The internal logic for the distributed AVSession to implement projection is as follows:
+
+- APIs related to [distributed audio playback](distributed-audio-playback.md) are called to project audio streams to the distributed device.
+
+- The distributed capability is used to project the session metadata to the distributed device for display.
+
+Projection implemented through the distributed AVSession therefore not only plays audio on the distributed device, but also displays the media information there and allows the user to perform playback control on the distributed device.
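+
+The following is a minimal sketch of the controller side of this process (steps 2 and 3 above). It assumes a system application that holds the **ohos.permission.MANAGE_MEDIA_RESOURCES** permission; picking the first session descriptor is purely illustrative.
+
+```ts
+import avSession from '@ohos.multimedia.avsession';
+
+async function controlSyncedSession() {
+  // Find the session that was synchronized to this device.
+  let descriptors = await avSession.getAllSessionDescriptors();
+  if (descriptors.length === 0) {
+    console.info('No AVSession is available on this device.');
+    return;
+  }
+  // Create a controller for the first session (illustrative choice).
+  let controller = await avSession.createController(descriptors[0].sessionId);
+  // Listen for metadata synchronized from the media provider.
+  controller.on('metadataChange', 'all', (metadata) => {
+    console.info(`Now playing: ${metadata.title}`);
+  });
+  // Send a playback control command back to the AVSession.
+  await controller.sendControlCommand({ command: 'play' });
+}
+```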
diff --git a/en/application-dev/media/figures/audio-capturer-state.png b/en/application-dev/media/figures/audio-capturer-state.png deleted file mode 100644 index 52b5556260dbf78c5e816b37013248a07e8dbbc6..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/audio-capturer-state.png and /dev/null differ diff --git a/en/application-dev/media/figures/audio-playback-interaction-diagram.png b/en/application-dev/media/figures/audio-playback-interaction-diagram.png new file mode 100644 index 0000000000000000000000000000000000000000..b96179b6b610463bc34d2515b145a57b29e574cb Binary files /dev/null and b/en/application-dev/media/figures/audio-playback-interaction-diagram.png differ diff --git a/en/application-dev/media/figures/audio-renderer-state.png b/en/application-dev/media/figures/audio-renderer-state.png deleted file mode 100644 index 9ae30c2a9306dc85662405c36da9e11d07ed9a2a..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/audio-renderer-state.png and /dev/null differ diff --git a/en/application-dev/media/figures/audio-stream-mgmt-invoking-relationship.png b/en/application-dev/media/figures/audio-stream-mgmt-invoking-relationship.png new file mode 100644 index 0000000000000000000000000000000000000000..50ad902dd8b55a91a220e2705fea5674cd855ae6 Binary files /dev/null and b/en/application-dev/media/figures/audio-stream-mgmt-invoking-relationship.png differ diff --git a/en/application-dev/media/figures/audiocapturer-status-change.png b/en/application-dev/media/figures/audiocapturer-status-change.png new file mode 100644 index 0000000000000000000000000000000000000000..aadbc4fb6470b7cdc0f399ee5954a96c01a7f7c3 Binary files /dev/null and b/en/application-dev/media/figures/audiocapturer-status-change.png differ diff --git a/en/application-dev/media/figures/audiorenderer-status-change.png b/en/application-dev/media/figures/audiorenderer-status-change.png new file mode 100644 index 0000000000000000000000000000000000000000..a721044f7aeccfed0260176963d192cac40dd8a6 Binary files /dev/null and b/en/application-dev/media/figures/audiorenderer-status-change.png differ diff --git a/en/application-dev/media/figures/avsession-interaction-process.png b/en/application-dev/media/figures/avsession-interaction-process.png new file mode 100644 index 0000000000000000000000000000000000000000..2347599b7d118c45c2d2eb58708729f91c4dc801 Binary files /dev/null and b/en/application-dev/media/figures/avsession-interaction-process.png differ diff --git a/en/application-dev/media/figures/bitmap-operation.png b/en/application-dev/media/figures/bitmap-operation.png new file mode 100644 index 0000000000000000000000000000000000000000..c5107dbabd86fdc29863d5f25947b447d9c1deeb Binary files /dev/null and b/en/application-dev/media/figures/bitmap-operation.png differ diff --git a/en/application-dev/media/figures/camera-development-model.png b/en/application-dev/media/figures/camera-development-model.png new file mode 100644 index 0000000000000000000000000000000000000000..fa97f369dda840cb474bc8fffbb7396b8a7b6508 Binary files /dev/null and b/en/application-dev/media/figures/camera-development-model.png differ diff --git a/en/application-dev/media/figures/camera-workflow.png b/en/application-dev/media/figures/camera-workflow.png new file mode 100644 index 0000000000000000000000000000000000000000..31a7e814724cf97a80a5cc8b88778334ccb352fb Binary files /dev/null and b/en/application-dev/media/figures/camera-workflow.png differ diff --git 
a/en/application-dev/media/figures/camera_framework_process.png b/en/application-dev/media/figures/camera_framework_process.png deleted file mode 100644 index bf4b6806fb19e087318306dbc7f9a4b0576273cd..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/camera_framework_process.png and /dev/null differ diff --git a/en/application-dev/media/figures/cropping.jpeg b/en/application-dev/media/figures/cropping.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..a564818815eb3fde13a40ef02d0811bd56803fb9 Binary files /dev/null and b/en/application-dev/media/figures/cropping.jpeg differ diff --git a/en/application-dev/media/figures/distributed-avsession-interaction-process.png b/en/application-dev/media/figures/distributed-avsession-interaction-process.png new file mode 100644 index 0000000000000000000000000000000000000000..d16e362db22857b2ddba3cdbf2142c3759f73fc8 Binary files /dev/null and b/en/application-dev/media/figures/distributed-avsession-interaction-process.png differ diff --git a/en/application-dev/media/figures/en-us_image_audio_player.png b/en/application-dev/media/figures/en-us_image_audio_player.png deleted file mode 100644 index 4edcec759e7b8507d605823f157ba9c6c1108fcd..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_audio_player.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_audio_recorder_state_machine.png b/en/application-dev/media/figures/en-us_image_audio_recorder_state_machine.png deleted file mode 100644 index 8cd657cf19c48da5e52809bad387984f50d5a3c7..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_audio_recorder_state_machine.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_audio_recorder_zero.png b/en/application-dev/media/figures/en-us_image_audio_recorder_zero.png deleted file mode 100644 index 7c33fcc1723fcdcc468bd3a6004de8b03b20100b..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_audio_recorder_zero.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_audio_routing_manager.png b/en/application-dev/media/figures/en-us_image_audio_routing_manager.png deleted file mode 100644 index 710679f6cac0c30d06dffa97b0e80b3cebe80f79..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_audio_routing_manager.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_audio_state_machine.png b/en/application-dev/media/figures/en-us_image_audio_state_machine.png deleted file mode 100644 index 22b7aeaa1db5b369d3daf44854d7f7f9a00f775b..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_audio_state_machine.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_audio_stream_manager.png b/en/application-dev/media/figures/en-us_image_audio_stream_manager.png deleted file mode 100644 index 1f326d4bd0798dd5ecc0b55130904cbf87d2ea1f..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_audio_stream_manager.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_audio_volume_manager.png b/en/application-dev/media/figures/en-us_image_audio_volume_manager.png deleted file mode 100644 index 
0d47fbfacce9c1ff48811e1cf5d764231bdb596b..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_audio_volume_manager.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_avplayer_audio.png b/en/application-dev/media/figures/en-us_image_avplayer_audio.png deleted file mode 100644 index b5eb9b02a977d0e4551a236c7cc8a154710f5517..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_avplayer_audio.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_avplayer_state_machine.png b/en/application-dev/media/figures/en-us_image_avplayer_state_machine.png deleted file mode 100644 index 12adecd8865a9ff1faaa2c6654e8558f2fac77aa..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_avplayer_state_machine.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_avplayer_video.png b/en/application-dev/media/figures/en-us_image_avplayer_video.png deleted file mode 100644 index 54525ebed1d1792f43156ffbeb1ffa37f56d8237..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_avplayer_video.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_avrecorder_module_interaction.png b/en/application-dev/media/figures/en-us_image_avrecorder_module_interaction.png deleted file mode 100644 index 7d5da3bdc91fe8fb7be9f0b4054f934ec054b8e6..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_avrecorder_module_interaction.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_avrecorder_state_machine.png b/en/application-dev/media/figures/en-us_image_avrecorder_state_machine.png deleted file mode 100644 index 7ffcb21f09365e9b072bdaf48f8b98d7d45a8aaa..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_avrecorder_state_machine.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_avsession.png b/en/application-dev/media/figures/en-us_image_avsession.png deleted file mode 100644 index 3289bc4ca3c54eb3e99c9230c821380f8f7c0c5b..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_avsession.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_video_player.png b/en/application-dev/media/figures/en-us_image_video_player.png deleted file mode 100644 index f9b4aabdc7215f22788d92c68ef353fafffda1c3..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_video_player.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_video_recorder_state_machine.png b/en/application-dev/media/figures/en-us_image_video_recorder_state_machine.png deleted file mode 100644 index 3e81dcc18d1f47b6de087a7a88fd75b308ea51a0..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_video_recorder_state_machine.png and /dev/null differ diff --git a/en/application-dev/media/figures/en-us_image_video_recorder_zero.png b/en/application-dev/media/figures/en-us_image_video_recorder_zero.png deleted file mode 100644 index a7f7fa09392eb916132d891a84d62f31f0f27782..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_video_recorder_zero.png and /dev/null differ diff --git 
a/en/application-dev/media/figures/en-us_image_video_state_machine.png b/en/application-dev/media/figures/en-us_image_video_state_machine.png deleted file mode 100644 index c0595ed5120b632142d6da8841c9e45277b10f55..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/figures/en-us_image_video_state_machine.png and /dev/null differ diff --git a/en/application-dev/media/figures/horizontal-flip.jpeg b/en/application-dev/media/figures/horizontal-flip.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..f43e4f6ab2adc68bf0f90eaf8177d36ee91f32ac Binary files /dev/null and b/en/application-dev/media/figures/horizontal-flip.jpeg differ diff --git a/en/application-dev/media/figures/image-development-process.png b/en/application-dev/media/figures/image-development-process.png new file mode 100644 index 0000000000000000000000000000000000000000..47db9d3faf7f8bffc80f63995dc73d0ad32799e5 Binary files /dev/null and b/en/application-dev/media/figures/image-development-process.png differ diff --git a/en/application-dev/media/figures/invoking-relationship-recording-stream-mgmt.png b/en/application-dev/media/figures/invoking-relationship-recording-stream-mgmt.png new file mode 100644 index 0000000000000000000000000000000000000000..a1f404f67bf18d91c2cc42ab65d8c7c5f01518a8 Binary files /dev/null and b/en/application-dev/media/figures/invoking-relationship-recording-stream-mgmt.png differ diff --git a/en/application-dev/media/figures/local-avsession-interaction-process.png b/en/application-dev/media/figures/local-avsession-interaction-process.png new file mode 100644 index 0000000000000000000000000000000000000000..dfccf9c6874f26a7e030189191f34248b7230b1a Binary files /dev/null and b/en/application-dev/media/figures/local-avsession-interaction-process.png differ diff --git a/en/application-dev/media/figures/media-system-framework.png b/en/application-dev/media/figures/media-system-framework.png new file mode 100644 index 0000000000000000000000000000000000000000..f1b92795c05db2caa6869acfba865f585a947c19 Binary files /dev/null and b/en/application-dev/media/figures/media-system-framework.png differ diff --git a/en/application-dev/media/figures/offsets.jpeg b/en/application-dev/media/figures/offsets.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..ab4c87a69bae55a62feddc0ca61a0ef1081bf199 Binary files /dev/null and b/en/application-dev/media/figures/offsets.jpeg differ diff --git a/en/application-dev/media/figures/original-drawing.jpeg b/en/application-dev/media/figures/original-drawing.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..01a0b0d7022dfc0130029154fec7321bc62dfe36 Binary files /dev/null and b/en/application-dev/media/figures/original-drawing.jpeg differ diff --git a/en/application-dev/media/figures/photographing-development-process.png b/en/application-dev/media/figures/photographing-development-process.png new file mode 100644 index 0000000000000000000000000000000000000000..b7ee61acfa63da55ef1389212e090da14a091a68 Binary files /dev/null and b/en/application-dev/media/figures/photographing-development-process.png differ diff --git a/en/application-dev/media/figures/playback-status-change.png b/en/application-dev/media/figures/playback-status-change.png new file mode 100644 index 0000000000000000000000000000000000000000..860764d3d15b93e544a6f27316584963acba2f0f Binary files /dev/null and b/en/application-dev/media/figures/playback-status-change.png differ diff --git 
a/en/application-dev/media/figures/recording-development-process.png b/en/application-dev/media/figures/recording-development-process.png new file mode 100644 index 0000000000000000000000000000000000000000..c29043a1f8b9255664969b4e0b0a1ca971d4e1f7 Binary files /dev/null and b/en/application-dev/media/figures/recording-development-process.png differ diff --git a/en/application-dev/media/figures/recording-status-change.png b/en/application-dev/media/figures/recording-status-change.png new file mode 100644 index 0000000000000000000000000000000000000000..9f15af9c1992e34fa7d750d08fd0245b6cb3ba67 Binary files /dev/null and b/en/application-dev/media/figures/recording-status-change.png differ diff --git a/en/application-dev/media/figures/rotate.jpeg b/en/application-dev/media/figures/rotate.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..5965abb46dc9648a3dfd9136e7cc0b5c5203e6a7 Binary files /dev/null and b/en/application-dev/media/figures/rotate.jpeg differ diff --git a/en/application-dev/media/figures/transparency.png b/en/application-dev/media/figures/transparency.png new file mode 100644 index 0000000000000000000000000000000000000000..b9b43939f0dad8ee40bf0b6b7e40ddf49d141c66 Binary files /dev/null and b/en/application-dev/media/figures/transparency.png differ diff --git a/en/application-dev/media/figures/vertical-flip.jpeg b/en/application-dev/media/figures/vertical-flip.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..8ef368d6bb914815a90c8d82352cbd6fd9ab505c Binary files /dev/null and b/en/application-dev/media/figures/vertical-flip.jpeg differ diff --git a/en/application-dev/media/figures/video-playback-interaction-diagram.png b/en/application-dev/media/figures/video-playback-interaction-diagram.png new file mode 100644 index 0000000000000000000000000000000000000000..93778e5fd397820e92b03f60a01076f251348ee6 Binary files /dev/null and b/en/application-dev/media/figures/video-playback-interaction-diagram.png differ diff --git a/en/application-dev/media/figures/video-playback-status-change.png b/en/application-dev/media/figures/video-playback-status-change.png new file mode 100644 index 0000000000000000000000000000000000000000..860764d3d15b93e544a6f27316584963acba2f0f Binary files /dev/null and b/en/application-dev/media/figures/video-playback-status-change.png differ diff --git a/en/application-dev/media/figures/video-recording-interaction-diagram.png b/en/application-dev/media/figures/video-recording-interaction-diagram.png new file mode 100644 index 0000000000000000000000000000000000000000..3fbbffe30f5ab06ba0f0a9e6487c76cecd5546c4 Binary files /dev/null and b/en/application-dev/media/figures/video-recording-interaction-diagram.png differ diff --git a/en/application-dev/media/figures/video-recording-status-change.png b/en/application-dev/media/figures/video-recording-status-change.png new file mode 100644 index 0000000000000000000000000000000000000000..9f15af9c1992e34fa7d750d08fd0245b6cb3ba67 Binary files /dev/null and b/en/application-dev/media/figures/video-recording-status-change.png differ diff --git a/en/application-dev/media/figures/zoom.jpeg b/en/application-dev/media/figures/zoom.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..977db6cfbc5b81f5396e4d81f8954a9f7d4168e4 Binary files /dev/null and b/en/application-dev/media/figures/zoom.jpeg differ diff --git a/en/application-dev/media/image-decoding.md b/en/application-dev/media/image-decoding.md new file mode 100644 index 
0000000000000000000000000000000000000000..d90f4b1ee653f97d2431fcf0c52923ba0fb81acf
--- /dev/null
+++ b/en/application-dev/media/image-decoding.md
@@ -0,0 +1,141 @@
+# Image Decoding
+
+Image decoding refers to the process of decoding an archived image in a supported format into a [pixel map](image-overview.md) for image display or [processing](image-transformation.md). Currently, the following image formats are supported: JPEG, PNG, GIF, RAW, WebP, BMP, and SVG.
+
+## How to Develop
+
+Read [Image](../reference/apis/js-apis-image.md#imagesource) for APIs related to image decoding.
+
+1. Import the image module.
+
+   ```ts
+   import image from '@ohos.multimedia.image';
+   ```
+
+2. Obtain an image.
+   - Method 1: Obtain the sandbox path. For details about how to obtain the sandbox path, see [Obtaining the Application Development Path](../application-models/application-context-stage.md#obtaining-the-application-development-path). For details about the application sandbox and how to push files to the application sandbox, see [File Management](../file-management/app-sandbox-directory.md).
+
+      ```ts
+      // Code on the stage model
+      const context = getContext(this);
+      const filePath = context.cacheDir + '/test.jpg';
+      ```
+
+      ```ts
+      // Code on the FA model
+      import featureAbility from '@ohos.ability.featureAbility';
+
+      const context = featureAbility.getContext();
+      const filePath = context.getCacheDir() + "/test.jpg";
+      ```
+   - Method 2: Obtain the file descriptor of the image through the sandbox path. For details, see [file.fs API Reference](../reference/apis/js-apis-file-fs.md).
+
+      To use this method, you must import the \@ohos.file.fs module first.
+
+      ```ts
+      import fs from '@ohos.file.fs';
+      ```
+
+      Then call **fs.openSync()** to obtain the file descriptor.
+
+      ```ts
+      // Code on the stage model
+      const context = getContext(this);
+      const filePath = context.cacheDir + '/test.jpg';
+      const file = fs.openSync(filePath, fs.OpenMode.READ_WRITE);
+      const fd = file?.fd;
+      ```
+
+      ```ts
+      // Code on the FA model
+      import featureAbility from '@ohos.ability.featureAbility';
+
+      const context = featureAbility.getContext();
+      const filePath = context.getCacheDir() + "/test.jpg";
+      const file = fs.openSync(filePath, fs.OpenMode.READ_WRITE);
+      const fd = file?.fd;
+      ```
+   - Method 3: Obtain the array buffer of the resource file through the resource manager. For details, see [ResourceManager API Reference](../reference/apis/js-apis-resource-manager.md#getrawfilecontent9-1).
+
+      ```ts
+      // Code on the stage model
+      const context = getContext(this);
+      // Obtain a resource manager.
+      const resourceMgr = context.resourceManager;
+      ```
+
+      ```ts
+      // Code on the FA model
+      // Import the resourceManager module.
+      import resourceManager from '@ohos.resourceManager';
+      const resourceMgr = await resourceManager.getResourceManager();
+      ```
+
+      The method of obtaining the resource manager varies according to the application model. After obtaining the resource manager, call **resourceMgr.getRawFileContent()** to obtain the array buffer of the resource file.
+
+      ```ts
+      const fileData = await resourceMgr.getRawFileContent('test.jpg');
+      // Obtain the array buffer of the image.
+      const buffer = fileData.buffer;
+      ```
+
+3. Create an **ImageSource** instance.
+   - Method 1: Create an **ImageSource** instance using the sandbox path. The sandbox path can be obtained by using method 1 in step 2.
+
+      ```ts
+      // filePath indicates the obtained sandbox path.
+      const imageSource = image.createImageSource(filePath);
+      ```
+   - Method 2: Create an **ImageSource** instance using the file descriptor. The file descriptor can be obtained by using method 2 in step 2.
+
+      ```ts
+      // fd is the obtained file descriptor.
+      const imageSource = image.createImageSource(fd);
+      ```
+   - Method 3: Create an **ImageSource** instance using a buffer array. The buffer array can be obtained by using method 3 in step 2.
+
+      ```ts
+      const imageSource = image.createImageSource(buffer);
+      ```
+
+4. Set **DecodingOptions** and decode the image to obtain a pixel map.
+
+   ```ts
+   let decodingOptions = {
+     editable: true,
+     desiredPixelFormat: 3,
+   }
+   // Create a pixel map based on the decoding options.
+   const pixelMap = await imageSource.createPixelMap(decodingOptions);
+   ```
+
+   After the decoding is complete and the pixel map is obtained, you can perform subsequent [image processing](image-transformation.md).
+
+## Sample Code - Decoding an Image in Resource Files
+
+1. Obtain a resource manager.
+
+   ```ts
+   const context = getContext(this);
+   // Obtain a resourceManager instance.
+   const resourceMgr = context.resourceManager;
+   ```
+
+2. Obtain the array buffer of the **test.jpg** file in the **rawfile** folder.
+
+   ```ts
+   const fileData = await resourceMgr.getRawFileContent('test.jpg');
+   // Obtain the array buffer of the image.
+   const buffer = fileData.buffer;
+   ```
+
+3. Create an **ImageSource** instance.
+
+   ```ts
+   const imageSource = image.createImageSource(buffer);
+   ```
+
+4. Create a **PixelMap** instance.
+
+   ```ts
+   const pixelMap = await imageSource.createPixelMap();
+   ```
diff --git a/en/application-dev/media/image-encoding.md b/en/application-dev/media/image-encoding.md
new file mode 100644
index 0000000000000000000000000000000000000000..96e23b6ba16c63bdaf282dbaf9abc01d95dd6221
--- /dev/null
+++ b/en/application-dev/media/image-encoding.md
@@ -0,0 +1,48 @@
+# Image Encoding
+
+Image encoding refers to the process of encoding a pixel map into an archived image in different formats (currently JPEG and WebP only) for subsequent processing, such as storage and transmission.
+
+## How to Develop
+
+Read [Image](../reference/apis/js-apis-image.md#imagepacker) for APIs related to image encoding.
+
+1. Create an **ImagePacker** object.
+
+   ```ts
+   // Import the required module.
+   import image from '@ohos.multimedia.image';
+
+   const imagePackerApi = image.createImagePacker();
+   ```
+
+2. Set the encoding output stream and encoding parameters.
+
+   **format** indicates the image encoding format, and **quality** indicates the image quality. The value ranges from 0 to 100, and the value 100 indicates the optimal quality.
+
+   ```ts
+   let packOpts = { format: "image/jpeg", quality: 98 };
+   ```
+
+3. [Create a PixelMap object or an ImageSource object](image-decoding.md).
+
+4. Encode the image and save the encoded image.
+
+   Method 1: Use the **PixelMap** object for encoding.
+
+   ```ts
+   imagePackerApi.packing(pixelMap, packOpts).then((data) => {
+     // data is the file stream obtained after packing. You can write the file and save it to obtain an image.
+   }).catch((error) => {
+     console.error('Failed to pack the image. And the error is: ' + error);
+   })
+   ```
+
+   Method 2: Use the **ImageSource** object for encoding.
+
+   ```ts
+   imagePackerApi.packing(imageSource, packOpts).then((data) => {
+     // data is the file stream obtained after packing. You can write the file and save it to obtain an image.
+   }).catch((error) => {
+     console.error('Failed to pack the image. And the error is: ' + error);
+   })
+   ```
diff --git a/en/application-dev/media/image-overview.md b/en/application-dev/media/image-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..854b84b77e93518a398390bac668c8d0c59dc5ad
--- /dev/null
+++ b/en/application-dev/media/image-overview.md
@@ -0,0 +1,38 @@
+# Image Overview
+
+Image development is the process of parsing, processing, and constructing image pixel data to achieve the required image effect. Image development mainly involves image decoding, processing, and encoding.
+
+Before image development, be familiar with the following basic concepts:
+
+- Image decoding
+
+  The operation of decoding an archived image in a supported format into a pixel map for image display or processing. Currently, the following image formats are supported: JPEG, PNG, GIF, RAW, WebP, BMP, and SVG.
+
+- Pixel map
+
+  An uncompressed bitmap obtained after decoding. It is used for image display or processing.
+
+- Image processing
+
+  A series of operations on the pixel map, such as rotation, scaling, opacity setting, image information obtaining, and pixel data reading and writing.
+
+- Image encoding
+
+  The operation of encoding a pixel map into an archived image in different formats (currently JPEG and WebP only) for subsequent processing, such as storage and transmission.
+
+The figure below illustrates the image development process.
+
+**Figure 1** Image development process
+![Image development process](figures/image-development-process.png)
+
+1. Image retrieval: Obtain a raw image through the application sandbox.
+
+2. Instance creation: Create an **ImageSource** instance, which is the source class of decoded images and is used to obtain or modify image information.
+
+3. [Image decoding](image-decoding.md): Decode the image source to generate a pixel map.
+
+4. [Image processing](image-transformation.md): Process the pixel map by modifying the image attributes to implement image rotation, scaling, and cropping, and then use the [Image component](../ui/arkts-graphics-display.md) to display the image.
+
+5. [Image encoding](image-encoding.md): Use the **ImagePacker** class to compress and encode the pixel map or image source to generate a new image.
+
+In addition to the preceding basic image development capabilities, OpenHarmony provides the [image tool](image-tool.md) to ease your development.
diff --git a/en/application-dev/media/image-pixelmap-operation.md b/en/application-dev/media/image-pixelmap-operation.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9b17b2c4dc5e5911e921d19a46d1b3066af5100
--- /dev/null
+++ b/en/application-dev/media/image-pixelmap-operation.md
@@ -0,0 +1,60 @@
+# Pixel Map Operation
+
+To process a certain area in an image, you can perform pixel map operations, which are usually used to beautify the image.
+
+As shown in the figure below, the pixel data of a rectangle in an image is read, modified, and then written back to the corresponding area of the original image.
+
+**Figure 1** Pixel map operation
+![Pixel map operation](figures/bitmap-operation.png)
+
+## How to Develop
+
+Read [Image](../reference/apis/js-apis-image.md#pixelmap7) for APIs related to pixel map operations.
+
+1. Complete [image decoding](image-decoding.md#how-to-develop) and obtain a **PixelMap** object.
+
+2. Obtain information from the **PixelMap** object.
+
+   ```ts
+   // Obtain the total number of bytes of this pixel map.
+   let pixelBytesNumber = pixelMap.getPixelBytesNumber();
+   // Obtain the number of bytes per row of this pixel map.
+   let rowCount = pixelMap.getBytesNumberPerRow();
+   // Obtain the pixel density of the image. Pixel density is the number of pixels per inch of an image. A larger value indicates a finer image.
+   let getDensity = pixelMap.getDensity();
+   ```
+
+3. Read and modify the pixel data of the target area, and write the modified data back to the original image.
+
+   ```ts
+   // Scenario 1: Read the pixel data of the entire image and write the modified data to an array buffer.
+   const readBuffer = new ArrayBuffer(pixelBytesNumber);
+   pixelMap.readPixelsToBuffer(readBuffer).then(() => {
+     console.info('Succeeded in reading image pixel data.');
+   }).catch((error) => {
+     console.error('Failed to read image pixel data. And the error is: ' + error);
+   })
+
+   // Scenario 2: Read the pixel data in a specified area and write the modified data to area.pixels.
+   const area = {
+     pixels: new ArrayBuffer(8),
+     offset: 0,
+     stride: 8,
+     region: { size: { height: 1, width: 2 }, x: 0, y: 0 }
+   }
+   pixelMap.readPixels(area).then(() => {
+     console.info('Succeeded in reading the image data in the area.');
+   }).catch((error) => {
+     console.error('Failed to read the image data in the area. And the error is: ' + error);
+   })
+
+   // The read image data can be used independently (by creating a pixel map) or modified as required.
+   // Write area.pixels back to the specified area.
+   pixelMap.writePixels(area).then(() => {
+     console.info('Succeeded in writing pixelMap into the specified area.');
+   })
+
+   // Write the image data result to a pixel map.
+   const writeColor = new ArrayBuffer(96);
+   pixelMap.writeBufferToPixels(writeColor, () => {});
+   ```
diff --git a/en/application-dev/media/image-tool.md b/en/application-dev/media/image-tool.md
new file mode 100644
index 0000000000000000000000000000000000000000..16748ff0b56557005793cdbe2798477995412cdf
--- /dev/null
+++ b/en/application-dev/media/image-tool.md
@@ -0,0 +1,43 @@
+# Image Tool
+
+The image tool provides the capabilities of reading and editing Exchangeable Image File Format (EXIF) data of an image.
+
+EXIF is a file format dedicated to photos taken by digital cameras and is used to record attributes and shooting data of the photos. Currently, the image tool supports images in JPEG format only.
+
+Users may need to view or modify the EXIF data of photos in the Gallery application, for example, when the manual lens parameters of the camera are not automatically written as part of the EXIF data or when the shooting time is incorrect because the camera was powered off.
+
+Currently, OpenHarmony allows you to view and modify part of the EXIF data. For details, see [EXIF](../reference/apis/js-apis-image.md#propertykey7).
+
+## How to Develop
+
+Read [Image](../reference/apis/js-apis-image.md#getimageproperty7) for APIs used to read and edit EXIF data.
+
+1. Obtain the image and create an **ImageSource** object.
+
+   ```ts
+   // Import the required module.
+   import image from '@ohos.multimedia.image';
+
+   // Obtain the sandbox path and create an ImageSource object.
+   const fd = ...; // Obtain the file descriptor of the image to be processed.
+   const imageSource = image.createImageSource(fd);
+   ```
+
+2. Read and edit the EXIF data.
+
+   ```ts
+   // Read the EXIF data, where BitsPerSample indicates the number of bits per pixel.
+   imageSource.getImageProperty('BitsPerSample', (error, data) => {
+     if (error) {
+       console.error('Failed to get the value of the specified attribute key of the image. And the error is: ' + error);
+     } else {
+       console.info('Succeeded in getting the value of the specified attribute key of the image ' + data);
+     }
+   })
+
+   // Edit the EXIF data.
+   imageSource.modifyImageProperty('ImageWidth', '120').then(() => {
+     imageSource.getImageProperty('ImageWidth').then((width) => {
+       console.info('The new imageWidth is ' + width);
+     })
+   })
+   ```
diff --git a/en/application-dev/media/image-transformation.md b/en/application-dev/media/image-transformation.md
new file mode 100644
index 0000000000000000000000000000000000000000..8965d409dda0fa9271feebb34b3b936c4b624bc6
--- /dev/null
+++ b/en/application-dev/media/image-transformation.md
@@ -0,0 +1,93 @@
+# Image Transformation
+
+Image processing refers to a series of operations performed on a pixel map, such as obtaining image information, cropping, scaling, translating, rotating, flipping, setting opacity, and reading and writing pixel data. These operations can be classified into image transformation and [pixel map operation](image-pixelmap-operation.md). This topic describes the image transformation operations that you can perform.
+
+## How to Develop
+
+Read [Image](../reference/apis/js-apis-image.md#pixelmap7) for APIs related to image transformation.
+
+1. Complete [image decoding](image-decoding.md#how-to-develop) and obtain a **PixelMap** object.
+
+2. Obtain image information.
+
+   ```ts
+   // Obtain the image size.
+   pixelMap.getImageInfo().then((info) => {
+     console.info('info.width = ' + info.size.width);
+     console.info('info.height = ' + info.size.height);
+   }).catch((err) => {
+     console.error("Failed to obtain the image pixel map information. And the error is: " + err);
+   });
+   ```
+
+3. Perform image transformation.
+
+   Original image:
+
+   ![Original drawing](figures/original-drawing.jpeg)
+   - Crop the image.
+
+     ```ts
+     // x: x-axis coordinate of the start point for cropping (0).
+     // y: y-axis coordinate of the start point for cropping (0).
+     // height: height after cropping (400), cropping from top to bottom.
+     // width: width after cropping (400), cropping from left to right.
+     pixelMap.crop({ x: 0, y: 0, size: { height: 400, width: 400 } });
+     ```
+
+     ![cropping](figures/cropping.jpeg)
+
+   - Scale the image.
+
+     ```ts
+     // The width of the image after scaling is 0.5 of the original width.
+     // The height of the image after scaling is 0.5 of the original height.
+     pixelMap.scale(0.5, 0.5);
+     ```
+
+     ![zoom](figures/zoom.jpeg)
+
+   - Translate the image.
+
+     ```ts
+     // Translate the image by 100 units downwards.
+     // Translate the image by 100 units to the right.
+     pixelMap.translate(100, 100);
+     ```
+
+     ![offsets](figures/offsets.jpeg)
+
+   - Rotate the image.
+
+     ```ts
+     // Rotate the image clockwise by 90°.
+     pixelMap.rotate(90);
+     ```
+
+     ![rotate](figures/rotate.jpeg)
+
+   - Flip the image.
+
+     ```ts
+     // Flip the image vertically.
+     pixelMap.flip(false, true);
+     ```
+
+     ![Vertical Flip](figures/vertical-flip.jpeg)
+
+     ```ts
+     // Flip the image horizontally.
+     pixelMap.flip(true, false);
+     ```
+
+     ![Horizontal Flip](figures/horizontal-flip.jpeg)
+
+   - Set the opacity of the image.
+
+     ```ts
+     // Set the opacity to 0.5.
+     pixelMap.opacity(0.5);
+     ```
+
+     ![Transparency](figures/transparency.png)
diff --git a/en/application-dev/media/image.md b/en/application-dev/media/image.md
deleted file mode 100644
index fb4e648b56839ef76cb0e5277443605734d7ab6f..0000000000000000000000000000000000000000
--- a/en/application-dev/media/image.md
+++ /dev/null
@@ -1,283 +0,0 @@
-# Image Development
-
-## When to Use
-
-You can use image development APIs to decode images into pixel maps and encode the pixel maps into a supported format.
-
-## Available APIs
-
-For details about the APIs, see [Image Processing](../reference/apis/js-apis-image.md).
-
-## How to Develop
-
-### Full-Process Scenario
-
-The full process includes creating an instance, reading image information, reading and writing pixel maps, updating data, packaging pixels, and releasing resources.
-
-```js
-const color = new ArrayBuffer(96); // Create a buffer to store image pixel data.
-let opts = { alphaType: 0, editable: true, pixelFormat: 4, scaleMode: 1, size: { height: 2, width: 3 } } // Image pixel data.
-
-// Create a PixelMap object.
-image.createPixelMap(color, opts, (err, pixelmap) => {
-    // Failed to create the PixelMap object.
-    if (err) {
-        console.info('create pixelmap failed, err' + err);
-        return
-    }
-    console.log('Succeeded in creating pixelmap.');
-
-    // Read pixels.
-    const area = {
-        pixels: new ArrayBuffer(8),
-        offset: 0,
-        stride: 8,
-        region: { size: { height: 1, width: 2 }, x: 0, y: 0 }
-    }
-    pixelmap.readPixels(area, () => {
-        let bufferArr = new Uint8Array(area.pixels);
-        let res = true;
-        for (let i = 0; i < bufferArr.length; i++) {
-            console.info(' buffer ' + bufferArr[i]);
-            if (res) {
-                if (bufferArr[i] == 0) {
-                    res = false;
-                    console.log('readPixels end.');
-                    break;
-                }
-            }
-        }
-    })
-
-    // Store pixels.
-    const readBuffer = new ArrayBuffer(96);
-    pixelmap.readPixelsToBuffer(readBuffer, () => {
-        let bufferArr = new Uint8Array(readBuffer);
-        let res = true;
-        for (let i = 0; i < bufferArr.length; i++) {
-            if (res) {
-                if (bufferArr[i] !== 0) {
-                    res = false;
-                    console.log('readPixelsToBuffer end.');
-                    break;
-                }
-            }
-        }
-    })
-
-    // Write pixels.
-    pixelmap.writePixels(area, () => {
-        const readArea = { pixels: new ArrayBuffer(20), offset: 0, stride: 8, region: { size: { height: 1, width: 2 }, x: 0, y: 0 }}
-        pixelmap.readPixels(readArea, () => {
-            let readArr = new Uint8Array(readArea.pixels);
-            let res = true;
-            for (let i = 0; i < readArr.length; i++) {
-                if (res) {
-                    if (readArr[i] !== 0) {
-                        res = false;
-                        console.log('readPixels end. Please check the buffer.');
-                        break;
-                    }
-                }
-            }
-        })
-    })
-
-    const writeColor = new ArrayBuffer(96); // Pixel data of the image.
-    // Write pixels to the buffer.
-    pixelmap.writeBufferToPixels(writeColor).then(() => {
-        const readBuffer = new ArrayBuffer(96);
-        pixelmap.readPixelsToBuffer(readBuffer).then(() => {
-            let bufferArr = new Uint8Array(readBuffer);
-            let res = true;
-            for (let i = 0; i < bufferArr.length; i++) {
-                if (res) {
-                    if (bufferArr[i] !== i) {
-                        res = false;
-                        console.log('readPixels end. Please check the buffer.');
-                        break;
-                    }
-                }
-            }
-        })
-    })
-
-    // Obtain image information.
-    pixelmap.getImageInfo((err, imageInfo) => {
-        // Failed to obtain the image information.
-        if (err || imageInfo == null) {
-            console.info('getImageInfo failed, err' + err);
-            return
-        }
-        console.log('Succeeded in getting imageInfo');
-    })
-
-    // Release the PixelMap object.
- pixelmap.release(()=>{ - console.log('Succeeded in releasing pixelmap'); - }) -}) - -// Create an image source (uri). -let path = '/data/local/tmp/test.jpg'; -const imageSourceApi1 = image.createImageSource(path); - -// Create an image source (fd). -let fd = 29; -const imageSourceApi2 = image.createImageSource(fd); - -// Create an image source (data). -const data = new ArrayBuffer(96); -const imageSourceApi3 = image.createImageSource(data); - -// Release the image source. -imageSourceApi3.release(() => { - console.log('Succeeded in releasing imagesource'); -}) - -// Encode the image. -const imagePackerApi = image.createImagePacker(); -const imageSourceApi = image.createImageSource(0); -let packOpts = { format:"image/jpeg", quality:98 }; -imagePackerApi.packing(imageSourceApi, packOpts, (err, data) => { - if (err) { - console.info('packing from imagePackerApi failed, err' + err); - return - } - console.log('Succeeded in packing'); -}) - -// Release the ImagePacker object. -imagePackerApi.release(); -``` - -### Decoding Scenario - -```js -let path = '/data/local/tmp/test.jpg'; // Set the path for creating an image source. - -// Create an image source using a path. -const imageSourceApi = image.createImageSource(path); // '/data/local/tmp/test.jpg' - -// Set parameters. -let decodingOptions = { - sampleSize:1, // Sampling size of the thumbnail. - editable: true, // Whether the image can be edited. - desiredSize:{ width:1, height:2}, // Desired output size of the image. - rotateDegrees:10, // Rotation angle of the image. - desiredPixelFormat:2, // Decoded pixel format. - desiredRegion: { size: { height: 1, width: 2 }, x: 0, y: 0 }, // Region of the image to decode. - index:0// Image sequence number. - }; - -// Create a pixel map in callback mode. -imageSourceApi.createPixelMap(decodingOptions, (err, pixelmap) => { - // Failed to create the PixelMap object. - if (err) { - console.info('create pixelmap failed, err' + err); - return - } - console.log('Succeeded in creating pixelmap.'); -}) - -// Create a pixel map in promise mode. -imageSourceApi.createPixelMap().then(pixelmap => { - console.log('Succeeded in creating pixelmap.'); - - // Obtain the number of bytes in each line of pixels. - let num = pixelmap.getBytesNumberPerRow(); - - // Obtain the total number of pixel bytes. - let pixelSize = pixelmap.getPixelBytesNumber(); - - // Obtain the pixel map information. - pixelmap.getImageInfo().then( imageInfo => {}); - - // Release the PixelMap object. - pixelmap.release(()=>{ - console.log('Succeeded in releasing pixelmap'); - }) -}).catch(error => { - console.log('Failed in creating pixelmap.' + error); -}) -``` - -### Encoding Scenario - -```js -let path = '/data/local/tmp/test.png' // Set the path for creating an image source. - -// Set the image source. -const imageSourceApi = image.createImageSource(path); // '/data/local/tmp/test.png' - -// Print the error message if the image source fails to be created. -if (imageSourceApi == null) { - console.log('Failed in creating imageSource.'); -} - -// Create an image packer if the image source is successfully created. -const imagePackerApi = image.createImagePacker(); - -// Print the error information if the image packer fails to be created. -if (imagePackerApi == null) { - console.log('Failed in creating imagePacker.'); -} - -// Set encoding parameters if the image packer is successfully created. -let packOpts = { format:"image/jpeg", // The supported encoding format is jpg. - quality:98 } // Image quality, which ranges from 0 to 100. 
-
-// Encode the image.
-imagePackerApi.packing(imageSourceApi, packOpts)
-.then( data => {
-    console.log('Succeeded in packing');
-})
-
-// Release the image packer after the encoding is complete.
-imagePackerApi.release();
-
-// Obtain the image source information.
-imageSourceApi.getImageInfo((err, imageInfo) => {
-    console.log('Succeeded in getting imageInfo');
-})
-
-const array = new ArrayBuffer(100); // Incremental data.
-// Update incremental data.
-imageSourceApi.updateData(array, false, 0, 10,(error, data)=> {})
-
-```
-
-### Using ImageReceiver
-
-Example scenario: The camera functions as the client to transmit image data to the server.
-
-```js
-public async init(surfaceId: any) {
-
-    // (Server code) Create an ImageReceiver object.
-    let receiver = image.createImageReceiver(8 * 1024, 8, image.ImageFormat.JPEG, 1);
-
-    // Obtain the surface ID.
-    receiver.getReceivingSurfaceId((err, surfaceId) => {
-        // Failed to obtain the surface ID.
-        if (err) {
-            console.info('getReceivingSurfaceId failed, err' + err);
-            return
-        }
-        console.info("receiver getReceivingSurfaceId success");
-    });
-    // Register a surface listener, which is triggered after the buffer of the surface is ready.
-    receiver.on('imageArrival', () => {
-        // Obtain the latest buffer of the surface.
-        receiver.readNextImage((err, img) => {
-            img.getComponent(4, (err, component) => {
-                // Consume component.byteBuffer. For example, save the content in the buffer as an image.
-            })
-        })
-    })
-
-    // Call a Camera API to transfer the surface ID to the camera, which then obtains the surface based on the surface ID and generates a surface buffer.
-}
-```
diff --git a/en/application-dev/media/local-avsession-overview.md b/en/application-dev/media/local-avsession-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..2ced0a180e3bed3a1adea4e4b3ff196721bc23a8
--- /dev/null
+++ b/en/application-dev/media/local-avsession-overview.md
@@ -0,0 +1,63 @@
+# Local AVSession Overview
+
+## Interaction Process
+
+For a local AVSession, the data sources are on the local device. The figure below illustrates the interaction process.
+
+![Local AVSession Interaction Process](figures/local-avsession-interaction-process.png)
+
+This process involves two roles: provider and controller.
+
+In the local AVSession, the provider exchanges information with the controller through AVSessionManager.
+
+1. The provider creates an **AVSession** object through AVSessionManager.
+
+2. Through the **AVSession** object, the provider sets session metadata (such as the asset ID, title, and duration) and playback attributes (such as the playback state, speed, and position).
+
+3. The controller creates an **AVSessionController** object through AVSessionManager.
+
+4. Through the **AVSessionController** object, the controller listens for changes of the session metadata and playback attributes.
+
+5. Through the **AVSessionController** object, the controller sends control commands to the **AVSession** object.
+
+6. Through the **AVSession** object, the provider listens for control commands from the controller, such as **play**, **playNext**, **fastForward**, and **setSpeed**.
+
+## AVSessionManager
+
+AVSessionManager provides the capability of managing sessions. It can create an **AVSession** object, create an **AVSessionController** object, send control commands, and listen for session state changes.
+
+Unlike the **AVSession** and **AVSessionController** objects, AVSessionManager is not a specific object, but the root namespace of AVSessions. You can import AVSessionManager as follows:
+
+```ts
+import AVSessionManager from '@ohos.multimedia.avsession';
+```
+
+All the APIs in the root namespace can be used as APIs of AVSessionManager.
+
+The code snippet below shows how the provider creates an **AVSession** object by using AVSessionManager:
+
+```ts
+// Create an AVSession object.
+async createSession() {
+  let session: AVSessionManager.AVSession = await AVSessionManager.createAVSession(this.context, 'SESSION_NAME', 'audio');
+  console.info(`session create done : sessionId : ${session.sessionId}`);
+}
+```
+
+The code snippet below shows how the controller creates an **AVSessionController** object by using AVSessionManager:
+
+```ts
+// Create an AVSessionController object.
+async createController() {
+  // Obtain the descriptors of all live AVSession objects.
+  let descriptorsArray: Array<AVSessionManager.AVSessionDescriptor> = await AVSessionManager.getAllSessionDescriptors();
+  if (descriptorsArray.length > 0) {
+    // For demonstration, the session ID of the first descriptor is used to create the AVSessionController object.
+    let sessionId: string = descriptorsArray[0].sessionId;
+    let avSessionController: AVSessionManager.AVSessionController = await AVSessionManager.createController(sessionId);
+    console.info(`controller create done : sessionId : ${avSessionController.sessionId}`);
+  }
+}
+```
+
+For more information about AVSessionManager APIs, see [API Reference](../reference/apis/js-apis-avsession.md).
diff --git a/en/application-dev/media/media-application-overview.md b/en/application-dev/media/media-application-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..d350482e61e7bc9659054b0426c10ce07da88045
--- /dev/null
+++ b/en/application-dev/media/media-application-overview.md
@@ -0,0 +1,19 @@
+# Media Application Development Overview
+
+## Multimedia Subsystem Architecture
+
+The multimedia subsystem provides the capability of processing users' visual and auditory information. For example, it can be used to collect, compress, store, decompress, and play audio and video information. Based on the type of media information being processed, the multimedia subsystem is usually divided into four modules: audio, media, camera, and image.
+
+As shown in the figure below, the multimedia subsystem provides APIs for developing audio/video, camera, and gallery applications, and provides adaptation and acceleration for different hardware chips. The middle layer provides core media functionalities and management mechanisms in the form of services.
+
+**Figure 1** Overall framework of the multimedia subsystem
+
+![Multimedia subsystem framework](figures/media-system-framework.png)
+
+- Audio module: provides interfaces and services for volume management, audio route management, and audio mixing management.
+
+- Media module: provides interfaces and services for audio and video decompression, playback, compression, and recording.
+
+- Camera module: provides interfaces and services for accurately controlling camera lenses and collecting visual information.
+
+- Image module: provides interfaces and services for image encoding, decoding, and processing.
diff --git a/en/application-dev/media/mic-management.md b/en/application-dev/media/mic-management.md
new file mode 100644
index 0000000000000000000000000000000000000000..952aeef3f3c607d3a2132eb6d1e0ab6bdd4490c9
--- /dev/null
+++ b/en/application-dev/media/mic-management.md
@@ -0,0 +1,114 @@
+# Microphone Management
+
+The microphone is used to record audio data. To deliver an optimal recording effect, you are advised to query the microphone state before starting recording and listen for state changes during recording.
+
+If the user mutes the microphone during audio recording, the recording process continues as normal: the size of the recorded file increases with the recording duration, but the data written to the file is all zeros.
+
+## How to Develop
+
+The **AudioVolumeGroupManager** class provides APIs for managing the microphone state. For details, see [API Reference](../reference/apis/js-apis-audio.md#audiovolumegroupmanager9).
+
+1. Create an **audioVolumeGroupManager** object.
+
+   ```ts
+   import audio from '@ohos.multimedia.audio';
+
+   let audioVolumeGroupManager;
+   async function loadVolumeGroupManager() { // Create an audioVolumeGroupManager object.
+     const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
+     audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
+     console.info('audioVolumeGroupManager create success.');
+   }
+   ```
+
+2. Call **on('micStateChange')** to listen for microphone state changes. When the microphone state changes, the application will be notified of the change.
+
+   Currently, when multiple **AudioManager** instances are used in a single process, only the subscription of the last instance takes effect, and the subscription of other instances is overwritten (even if the last instance does not initiate a subscription). Therefore, you are advised to use a single **AudioManager** instance.
+
+
+   ```ts
+   async function on() { // Subscribe to microphone state changes.
+     audioVolumeGroupManager.on('micStateChange', (micStateChange) => {
+       console.info(`Current microphone status is: ${micStateChange.mute} `);
+     });
+   }
+   ```
+
+3. Call **isMicrophoneMute** to check whether the microphone is muted. If the returned value is **true**, the microphone is muted; otherwise, the microphone is not muted.
+
+   ```ts
+   async function isMicrophoneMute() { // Check whether the microphone is muted.
+     await audioVolumeGroupManager.isMicrophoneMute().then((value) => {
+       console.info(`isMicrophoneMute is: ${value}.`);
+     });
+   }
+   ```
+
+4. Call **setMicrophoneMute** to mute or unmute the microphone. To mute the microphone, pass in **true**. To unmute the microphone, pass in **false**.
+
+   ```ts
+   async function setMicrophoneMuteTrue() { // Pass in true to mute the microphone.
+     await audioVolumeGroupManager.setMicrophoneMute(true).then(() => {
+       console.info('setMicrophoneMute to mute.');
+     });
+   }
+   async function setMicrophoneMuteFalse() { // Pass in false to unmute the microphone.
+     await audioVolumeGroupManager.setMicrophoneMute(false).then(() => {
+       console.info('setMicrophoneMute to not mute.');
+     });
+   }
+   ```
+
+## Sample Code
+
+Refer to the sample code below to complete the process of muting and unmuting the microphone.
+
+```ts
+import audio from '@ohos.multimedia.audio';
+
+@Entry
+@Component
+struct AudioVolumeGroup {
+  private audioVolumeGroupManager: audio.AudioVolumeGroupManager;
+
+  async loadVolumeGroupManager() {
+    const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
+    this.audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
+    console.info('audioVolumeGroupManager create success.');
+  }
+
+  async on() { // Subscribe to microphone state changes.
+ await this.loadVolumeGroupManager(); + this.audioVolumeGroupManager.on('micStateChange', (micStateChange) => { + console.info(`Current microphone status is: ${micStateChange.mute} `); + }); + } + async isMicrophoneMute() { // Check whether the microphone is muted. + await this.audioVolumeGroupManager.isMicrophoneMute().then((value) => { + console.info(`isMicrophoneMute is: ${value}.`); + }); + } + async setMicrophoneMuteTrue() { // Mute the microphone. + await this.loadVolumeGroupManager(); + await this.audioVolumeGroupManager.setMicrophoneMute(true).then(() => { + console.info('setMicrophoneMute to mute.'); + }); + } + async setMicrophoneMuteFalse() { // Unmute the microphone. + await this.loadVolumeGroupManager(); + await this.audioVolumeGroupManager.setMicrophoneMute(false).then(() => { + console.info('setMicrophoneMute to not mute.'); + }); + } + async test(){ + await this.on(); + await this.isMicrophoneMute(); + await this.setMicrophoneMuteTrue(); + await this.isMicrophoneMute(); + await this.setMicrophoneMuteFalse(); + await this.isMicrophoneMute(); + await this.setMicrophoneMuteTrue(); + await this.isMicrophoneMute(); + } +} +``` diff --git a/en/application-dev/media/opensles-capture.md b/en/application-dev/media/opensles-capture.md deleted file mode 100644 index 3c33b37076ac14d98b550ba7b1a7e36bfe1cb048..0000000000000000000000000000000000000000 --- a/en/application-dev/media/opensles-capture.md +++ /dev/null @@ -1,151 +0,0 @@ -# OpenSL ES Audio Recording Development - -## Introduction - -You can use OpenSL ES to develop the audio recording function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned. - -## How to Develop - -To use OpenSL ES to develop the audio recording function in OpenHarmony, perform the following steps: - -1. Add the header files. - - ```c++ - #include - #include - #include - ``` - -2. Use the **slCreateEngine** API to create and instantiate the **engine** instance. - - ```c++ - SLObjectItf engineObject = nullptr; - slCreateEngine(&engineObject, 0, nullptr, 0, nullptr, nullptr); - (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE); - ``` - -3. Obtain the **engineEngine** instance of the **SL_IID_ENGINE** interface. - - ```c++ - SLEngineItf engineItf = nullptr; - result = (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineItf); - ``` - -4. Configure the recorder information (including the input source **audiosource** and output source **audiosink**), and create a **pcmCapturerObject** instance. - - ```c++ - SLDataLocator_IODevice io_device = { - SL_DATALOCATOR_IODEVICE, - SL_IODEVICE_AUDIOINPUT, - SL_DEFAULTDEVICEID_AUDIOINPUT, - NULL - }; - - SLDataSource audioSource = { - &io_device, - NULL - }; - - SLDataLocator_BufferQueue buffer_queue = { - SL_DATALOCATOR_BUFFERQUEUE, - 3 - }; - - // Configure the parameters based on the audio file format. - SLDataFormat_PCM format_pcm = { - SL_DATAFORMAT_PCM, // Input audio format. - 1, // Mono channel. - SL_SAMPLINGRATE_44_1, // Sampling rate, 44100 Hz. - SL_PCMSAMPLEFORMAT_FIXED_16, // Audio sampling format, a signed 16-bit integer in little-endian format. 
- 0, - 0, - 0 - }; - - SLDataSink audioSink = { - &buffer_queue, - &format_pcm - }; - - SLObjectItf pcmCapturerObject = nullptr; - result = (*engineItf)->CreateAudioRecorder(engineItf, &pcmCapturerObject, - &audioSource, &audioSink, 0, nullptr, nullptr); - (*pcmCapturerObject)->Realize(pcmCapturerObject, SL_BOOLEAN_FALSE); - ``` - -5. Obtain the **recordItf** instance of the **SL_IID_RECORD** interface. - - ```c++ - SLRecordItf recordItf; - (*pcmCapturerObject)->GetInterface(pcmCapturerObject, SL_IID_RECORD, &recordItf); - ``` - -6. Obtain the **bufferQueueItf** instance of the **SL_IID_OH_BUFFERQUEUE** interface. - - ```c++ - SLOHBufferQueueItf bufferQueueItf; - (*pcmCapturerObject)->GetInterface(pcmCapturerObject, SL_IID_OH_BUFFERQUEUE, &bufferQueueItf); - ``` - -7. Register the **BufferQueueCallback** function. - - ```c++ - static void BufferQueueCallback(SLOHBufferQueueItf bufferQueueItf, void *pContext, SLuint32 size) - { - AUDIO_INFO_LOG("BufferQueueCallback"); - FILE *wavFile = (FILE *)pContext; - if (wavFile != nullptr) { - SLuint8 *buffer = nullptr; - SLuint32 pSize = 0; - (*bufferQueueItf)->GetBuffer(bufferQueueItf, &buffer, pSize); - if (buffer != nullptr) { - fwrite(buffer, 1, pSize, wavFile); - (*bufferQueueItf)->Enqueue(bufferQueueItf, buffer, size); - } - } - - return; - } - - // Set wavFile_ to the descriptor of the file to be recorded. - (*bufferQueueItf)->RegisterCallback(bufferQueueItf, BufferQueueCallback, wavFile_); - ``` - -8. Start audio recording. - - ```c++ - static void CaptureStart(SLRecordItf recordItf, SLOHBufferQueueItf bufferQueueItf, FILE *wavFile) - { - AUDIO_INFO_LOG("CaptureStart"); - (*recordItf)->SetRecordState(recordItf, SL_RECORDSTATE_RECORDING); - if (wavFile != nullptr) { - SLuint8* buffer = nullptr; - SLuint32 pSize = 0; - (*bufferQueueItf)->GetBuffer(bufferQueueItf, &buffer, pSize); - if (buffer != nullptr) { - AUDIO_INFO_LOG("CaptureStart, enqueue buffer length: %{public}lu.", pSize); - fwrite(buffer, 1, pSize, wavFile); - (*bufferQueueItf)->Enqueue(bufferQueueItf, buffer, pSize); - } else { - AUDIO_INFO_LOG("CaptureStart, buffer is null or pSize: %{public}lu.", pSize); - } - } - - return; - } - ``` - -9. Stop audio recording. - - ```c++ - static void CaptureStop(SLRecordItf recordItf) - { - AUDIO_INFO_LOG("Enter CaptureStop"); - fflush(wavFile_); - (*recordItf)->SetRecordState(recordItf, SL_RECORDSTATE_STOPPED); - (*pcmCapturerObject)->Destroy(pcmCapturerObject); - fclose(wavFile_); - wavFile_ = nullptr; - return; - } - ``` diff --git a/en/application-dev/media/opensles-playback.md b/en/application-dev/media/opensles-playback.md deleted file mode 100644 index fe89bc9553da3163e1e18ca43922ff99e13c1307..0000000000000000000000000000000000000000 --- a/en/application-dev/media/opensles-playback.md +++ /dev/null @@ -1,104 +0,0 @@ -# OpenSL ES Audio Playback Development - -## Introduction - -You can use OpenSL ES to develop the audio playback function in OpenHarmony. Currently, only some [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) are implemented. If an API that has not been implemented is called, **SL_RESULT_FEATURE_UNSUPPORTED** will be returned. - -## How to Develop - -To use OpenSL ES to develop the audio playback function in OpenHarmony, perform the following steps: - -1. Add the header files. - - ```c++ - #include - #include - #include - ``` - -2. Use the **slCreateEngine** API to obtain an **engine** instance. 
- - ```c++ - SLObjectItf engineObject = nullptr; - slCreateEngine(&engineObject, 0, nullptr, 0, nullptr, nullptr); - (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE); - ``` - -3. Obtain the **engineEngine** instance of the **SL_IID_ENGINE** interface. - - ```c++ - SLEngineItf engineEngine = nullptr; - (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine); - ``` - -4. Configure the player and create an **AudioPlayer** instance. - - ```c++ - SLDataLocator_BufferQueue slBufferQueue = { - SL_DATALOCATOR_BUFFERQUEUE, - 0 - }; - - // Configure the parameters based on the audio file format. - SLDataFormat_PCM pcmFormat = { - SL_DATAFORMAT_PCM, - 2, - 48000, - 16, - 0, - 0, - 0 - }; - SLDataSource slSource = {&slBufferQueue, &pcmFormat}; - - SLObjectItf pcmPlayerObject = nullptr; - (*engineEngine)->CreateAudioPlayer(engineEngine, &pcmPlayerObject, &slSource, null, 0, nullptr, nullptr); - (*pcmPlayerObject)->Realize(pcmPlayerObject, SL_BOOLEAN_FALSE); - ``` - -5. Obtain the **bufferQueueItf** instance of the **SL_IID_OH_BUFFERQUEUE** interface. - - ```c++ - SLOHBufferQueueItf bufferQueueItf; - (*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_OH_BUFFERQUEUE, &bufferQueueItf); - ``` - -6. Open an audio file and register the **BufferQueueCallback** function. - - ```c++ - FILE *wavFile_ = nullptr; - - static void BufferQueueCallback (SLOHBufferQueueItf bufferQueueItf, void *pContext, SLuint32 size) - { - FILE *wavFile = (FILE *)pContext; - if (!feof(wavFile)) { - SLuint8 *buffer = nullptr; - SLuint32 pSize = 0; - (*bufferQueueItf)->GetBuffer(bufferQueueItf, &buffer, pSize); - // Read data from the file. - fread(buffer, 1, size, wavFile); - (*bufferQueueItf)->Enqueue(bufferQueueItf, buffer, size); - } - return; - } - - // Set wavFile_ to the descriptor of the file to be played. - wavFile_ = fopen(path, "rb"); - (*bufferQueueItf)->RegisterCallback(bufferQueueItf, BufferQueueCallback, wavFile_); - ``` - -7. Obtain the **playItf** instance of the **SL_PLAYSTATE_PLAYING** interface and start playback. - - ```c++ - SLPlayItf playItf = nullptr; - (*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_PLAY, &playItf); - (*playItf)->SetPlayState(playItf, SL_PLAYSTATE_PLAYING); - ``` - -8. Stop audio playback. 
- - ```c++ - (*playItf)->SetPlayState(playItf, SL_PLAYSTATE_STOPPED); - (*pcmPlayerObject)->Destroy(pcmPlayerObject); - (*engineObject)->Destroy(engineObject); - ``` diff --git a/en/application-dev/media/public_sys-resources/icon-caution.gif b/en/application-dev/media/public_sys-resources/icon-caution.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/public_sys-resources/icon-caution.gif and /dev/null differ diff --git a/en/application-dev/media/public_sys-resources/icon-danger.gif b/en/application-dev/media/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/en/application-dev/media/public_sys-resources/icon-note.gif b/en/application-dev/media/public_sys-resources/icon-note.gif deleted file mode 100644 index 6314297e45c1de184204098efd4814d6dc8b1cda..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/public_sys-resources/icon-note.gif and /dev/null differ diff --git a/en/application-dev/media/public_sys-resources/icon-notice.gif b/en/application-dev/media/public_sys-resources/icon-notice.gif deleted file mode 100644 index 86024f61b691400bea99e5b1f506d9d9aef36e27..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/public_sys-resources/icon-notice.gif and /dev/null differ diff --git a/en/application-dev/media/public_sys-resources/icon-tip.gif b/en/application-dev/media/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/en/application-dev/media/public_sys-resources/icon-warning.gif b/en/application-dev/media/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/en/application-dev/media/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/en/application-dev/media/remote-camera.md b/en/application-dev/media/remote-camera.md deleted file mode 100644 index d7bf710279c1504cd9703eca9af7cf5433cb3dac..0000000000000000000000000000000000000000 --- a/en/application-dev/media/remote-camera.md +++ /dev/null @@ -1,65 +0,0 @@ -# Distributed Camera Development - -## When to Use - -You can call the APIs provided by the **Camera** module to develop a distributed camera that provides the basic camera functions such as shooting and video recording. - -## How to Develop -Connect your calculator to a distributed device. Your calculator will call **getSupportedCameras()** to obtain the camera list and traverse the returned camera list to check **ConnectionType** of the **Camera** objects. If **ConnectionType** of a **Camera** object is **CAMERA_CONNECTION_REMOTE**, your calculator will use this object to create a **cameraInput** object. The subsequent call process is the same as that of the local camera development. For details about the local camera development, see [Camera Development](./camera.md). - -For details about the APIs, see [Camera Management](../reference/apis/js-apis-camera.md). - -### Connecting to a Distributed Camera - -Connect the calculator and the distributed device to the same LAN. 
-
-Open the calculator and click the arrow icon in the upper right corner. A new window is displayed. Enter the verification code as prompted, and the calculator will be connected to the distributed device.
-
-### Creating an Instance
-
-```js
-import camera from '@ohos.multimedia.camera'
-import image from '@ohos.multimedia.image'
-import media from '@ohos.multimedia.media'
-import featureAbility from '@ohos.ability.featureAbility'
-
-// Create a CameraManager object.
-let cameraManager = camera.getCameraManager(globalThis.Context)
-if (!cameraManager) {
-    console.error("camera.getCameraManager error")
-    return;
-}
-
-// Register a callback to listen for camera status changes and obtain the updated camera status information.
-cameraManager.on('cameraStatus', (cameraStatusInfo) => {
-    console.log('camera : ' + cameraStatusInfo.camera.cameraId);
-    console.log('status: ' + cameraStatusInfo.status);
-})
-
-// Obtain the camera list.
-let remoteCamera
-let cameraArray = cameraManager.getSupportedCameras();
-if (cameraArray.length <= 0) {
-    console.error("cameraManager.getSupportedCameras error")
-    return;
-}
-
-for(let cameraIndex = 0; cameraIndex < cameraArray.length; cameraIndex++) {
-    console.log('cameraId : ' + cameraArray[cameraIndex].cameraId) // Obtain the camera ID.
-    console.log('cameraPosition : ' + cameraArray[cameraIndex].cameraPosition) // Obtain the camera position.
-    console.log('cameraType : ' + cameraArray[cameraIndex].cameraType) // Obtain the camera type.
-    console.log('connectionType : ' + cameraArray[cameraIndex].connectionType) // Obtain the camera connection type.
-    if (cameraArray[cameraIndex].connectionType == CAMERA_CONNECTION_REMOTE) {
-        remoteCamera = cameraArray[cameraIndex]
-    }
-}
-
-// Create a camera input stream.
-let cameraInput
-try {
-    cameraInput = cameraManager.createCameraInput(remoteCamera);
-} catch () {
-    console.error('Failed to createCameraInput errorCode = ' + error.code);
-}
-```
-For details about the subsequent steps, see [Camera Development](./camera.md).
diff --git a/en/application-dev/media/using-audiocapturer-for-recording.md b/en/application-dev/media/using-audiocapturer-for-recording.md
new file mode 100644
index 0000000000000000000000000000000000000000..87d13fa3f749cb18ba1c9d61843b750a36a1bcad
--- /dev/null
+++ b/en/application-dev/media/using-audiocapturer-for-recording.md
@@ -0,0 +1,211 @@
+# Using AudioCapturer for Audio Recording
+
+The AudioCapturer is used to record Pulse Code Modulation (PCM) audio data. It is suitable if you have extensive audio development experience and want to implement more flexible recording features.
+
+## Development Guidelines
+
+The full recording process involves creating an **AudioCapturer** instance, configuring audio recording parameters, starting and stopping recording, and releasing the instance. In this topic, you will learn how to use the AudioCapturer to record audio data. Before the development, you are advised to read [AudioCapturer](../reference/apis/js-apis-audio.md#audiocapturer8) for the API reference.
+
+The figure below shows the state changes of the AudioCapturer. After an **AudioCapturer** instance is created, different APIs can be called to switch the AudioCapturer to different states and trigger the required behavior. If an API is called when the AudioCapturer is not in the given state, the system may throw an exception or generate other undefined behavior. Therefore, you are advised to check the AudioCapturer state before triggering state transition.
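+
+For example, before calling **start()**, the application can check the **state** attribute and proceed only when the AudioCapturer is in a startable state. The snippet below is a minimal sketch of such a guard; it assumes an **AudioCapturer** instance named **audioCapturer** has already been created (see step 1 below). The complete sample at the end of this topic applies the same kind of check before **start()**, **stop()**, and **release()**.
+
+```ts
+import audio from '@ohos.multimedia.audio';
+
+function startIfReady(audioCapturer: audio.AudioCapturer) {
+  // start() is valid only in the prepared, paused, or stopped state.
+  let startableStates = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
+  if (startableStates.indexOf(audioCapturer.state) === -1) {
+    console.error(`start() is not allowed in state ${audioCapturer.state}`);
+    return;
+  }
+  audioCapturer.start((err) => {
+    if (err) {
+      console.error(`Capturer start failed, code is ${err.code}, message is ${err.message}`);
+    } else {
+      console.info('Capturer start success.');
+    }
+  });
+}
+```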
+ +**Figure 1** AudioCapturer state transition +![AudioCapturer state change](figures/audiocapturer-status-change.png) + +You can call **on('stateChange')** to listen for state changes. For details about each state, see [AudioState](../reference/apis/js-apis-audio.md#audiostate8). + +### How to Develop + +1. Set audio recording parameters and create an **AudioCapturer** instance. For details about the parameters, see [AudioCapturerOptions](../reference/apis/js-apis-audio.md#audiocaptureroptions8). + + ```ts + import audio from '@ohos.multimedia.audio'; + + let audioStreamInfo = { + samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, + channels: audio.AudioChannel.CHANNEL_2, + sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, + encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW + }; + + let audioCapturerInfo = { + source: audio.SourceType.SOURCE_TYPE_MIC, + capturerFlags: 0 + }; + + let audioCapturerOptions = { + streamInfo: audioStreamInfo, + capturerInfo: audioCapturerInfo + }; + + audio.createAudioCapturer(audioCapturerOptions, (err, data) => { + if (err) { + console.error(`Invoke createAudioCapturer failed, code is ${err.code}, message is ${err.message}`); + } else { + console.info('Invoke createAudioCapturer succeeded.'); + let audioCapturer = data; + } + }); + ``` + +2. Call **start()** to switch the AudioCapturer to the **running** state and start recording. + + ```ts + audioCapturer.start((err) => { + if (err) { + console.error(`Capturer start failed, code is ${err.code}, message is ${err.message}`); + } else { + console.info('Capturer start success.'); + } + }); + ``` + +3. Specify the recording file path and call **read()** to read the data in the buffer. + + ```ts + let file = fs.openSync(path, 0o2 | 0o100); + let bufferSize = await audioCapturer.getBufferSize(); + let buffer = await audioCapturer.read(bufferSize, true); + fs.writeSync(file.fd, buffer); + ``` + +4. Call **stop()** to stop recording. + + ```ts + audioCapturer.stop((err) => { + if (err) { + console.error(`Capturer stop failed, code is ${err.code}, message is ${err.message}`); + } else { + console.info('Capturer stopped.'); + } + }); + ``` + +5. Call **release()** to release the instance. + + ```ts + audioCapturer.release((err) => { + if (err) { + console.error(`capturer release failed, code is ${err.code}, message is ${err.message}`); + } else { + console.info('capturer released.'); + } + }); + ``` + + +### Sample Code + +Refer to the sample code below to record audio using AudioCapturer. + +```ts +import audio from '@ohos.multimedia.audio'; +import fs from '@ohos.file.fs'; + +const TAG = 'AudioCapturerDemo'; + +export default class AudioCapturerDemo { + private audioCapturer = undefined; + private audioStreamInfo = { + samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, + channels: audio.AudioChannel.CHANNEL_1, + sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, + encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW + } + private audioCapturerInfo = { + source: audio.SourceType.SOURCE_TYPE_MIC, // Audio source type. + capturerFlags: 0 // Flag indicating an AudioCapturer. + } + private audioCapturerOptions = { + streamInfo: this.audioStreamInfo, + capturerInfo: this.audioCapturerInfo + } + + // Create an AudioCapturer instance, and set the events to listen for. + init() { + audio.createAudioCapturer(this.audioCapturerOptions, (err, capturer) => { // Create an AudioCapturer instance. 
+      if (err) {
+        console.error(`Invoke createAudioCapturer failed, code is ${err.code}, message is ${err.message}`);
+        return;
+      }
+
+      console.info(`${TAG}: create AudioCapturer success`);
+      this.audioCapturer = capturer;
+      this.audioCapturer.on('markReach', 1000, (position) => { // Subscribe to the markReach event. A callback is triggered when the number of captured frames reaches 1000.
+        if (position === 1000) {
+          console.info('ON Triggered successfully');
+        }
+      });
+      this.audioCapturer.on('periodReach', 2000, (position) => { // Subscribe to the periodReach event. A callback is triggered when the number of captured frames reaches 2000.
+        if (position === 2000) {
+          console.info('ON Triggered successfully');
+        }
+      });
+
+    });
+  }
+
+  // Start audio recording.
+  async start() {
+    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
+    if (stateGroup.indexOf(this.audioCapturer.state) === -1) { // Recording can be started only when the AudioCapturer is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
+      console.error(`${TAG}: start failed`);
+      return;
+    }
+    await this.audioCapturer.start(); // Start recording.
+
+    let context = getContext(this);
+    const path = context.filesDir + '/test.wav'; // Path for storing the recorded audio file.
+
+    let file = fs.openSync(path, 0o2 | 0o100); // Create the file if it does not exist.
+    let fd = file.fd;
+    let numBuffersToCapture = 150; // Write data 150 times.
+    let count = 0;
+    while (numBuffersToCapture) {
+      let bufferSize = await this.audioCapturer.getBufferSize();
+      let buffer = await this.audioCapturer.read(bufferSize, true);
+      let options = {
+        offset: count * bufferSize,
+        length: bufferSize
+      };
+      if (buffer === undefined) {
+        console.error(`${TAG}: read buffer failed`);
+      } else {
+        let number = fs.writeSync(fd, buffer, options);
+        console.info(`${TAG}: write data: ${number}`);
+      }
+      numBuffersToCapture--;
+      count++;
+    }
+  }
+
+  // Stop recording.
+  async stop() {
+    // The AudioCapturer can be stopped only when it is in the STATE_RUNNING or STATE_PAUSED state.
+    if (this.audioCapturer.state !== audio.AudioState.STATE_RUNNING && this.audioCapturer.state !== audio.AudioState.STATE_PAUSED) {
+      console.info('Capturer is not running or paused');
+      return;
+    }
+    await this.audioCapturer.stop(); // Stop recording.
+    if (this.audioCapturer.state === audio.AudioState.STATE_STOPPED) {
+      console.info('Capturer stopped');
+    } else {
+      console.error('Capturer stop failed');
+    }
+  }
+
+  // Release the instance.
+  async release() {
+    // The AudioCapturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
+    if (this.audioCapturer.state === audio.AudioState.STATE_RELEASED || this.audioCapturer.state === audio.AudioState.STATE_NEW) {
+      console.info('Capturer already released');
+      return;
+    }
+    await this.audioCapturer.release(); // Release the instance.
+    if (this.audioCapturer.state === audio.AudioState.STATE_RELEASED) {
+      console.info('Capturer released');
+    } else {
+      console.error('Capturer release failed');
+    }
+  }
+}
+```
diff --git a/en/application-dev/media/using-audiorenderer-for-playback.md b/en/application-dev/media/using-audiorenderer-for-playback.md
new file mode 100644
index 0000000000000000000000000000000000000000..11934e669813fa7a89ceef43bd2c3795db6bad75
--- /dev/null
+++ b/en/application-dev/media/using-audiorenderer-for-playback.md
@@ -0,0 +1,268 @@
+# Using AudioRenderer for Audio Playback
+
+The AudioRenderer is used to play Pulse Code Modulation (PCM) audio data. Unlike the AVPlayer, the AudioRenderer lets the application preprocess the audio data before it is written for rendering. Therefore, the AudioRenderer is more suitable if you have extensive audio development experience and want to implement more flexible playback features.
+
+## Development Guidelines
+
+The full rendering process involves creating an **AudioRenderer** instance, configuring audio rendering parameters, starting and stopping rendering, and releasing the instance. In this topic, you will learn how to use the AudioRenderer to render audio data. Before the development, you are advised to read [AudioRenderer](../reference/apis/js-apis-audio.md#audiorenderer8) for the API reference.
+
+The figure below shows the state changes of the AudioRenderer. After an **AudioRenderer** instance is created, different APIs can be called to switch the AudioRenderer to different states and trigger the required behavior. If an API is called when the AudioRenderer is not in the given state, the system may throw an exception or generate other undefined behavior. Therefore, you are advised to check the AudioRenderer state before triggering state transition.
+
+To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API is provided in both callback and promise forms. The following examples use the callback form.
+
+**Figure 1** AudioRenderer state transition
+
+![AudioRenderer state transition](figures/audiorenderer-status-change.png)
+
+During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the AudioRenderer. This is because some operations can be performed only when the AudioRenderer is in a given state. If the application performs an operation when the AudioRenderer is not in the given state, the system may throw an exception or generate other undefined behavior.
+
+- **prepared**: The AudioRenderer enters this state by calling **createAudioRenderer()**.
+
+- **running**: The AudioRenderer enters this state by calling **start()** when it is in the **prepared**, **paused**, or **stopped** state.
+
+- **paused**: The AudioRenderer enters this state by calling **pause()** when it is in the **running** state. When the audio playback is paused, the application can call **start()** to resume the playback.
+
+- **stopped**: The AudioRenderer enters this state by calling **stop()** when it is in the **paused** or **running** state.
+
+- **released**: The AudioRenderer enters this state by calling **release()** when it is in the **prepared**, **paused**, or **stopped** state. In this state, the AudioRenderer releases all occupied hardware and software resources and will not transit to any other state.
+
+### How to Develop
+
+1. Set audio rendering parameters and create an **AudioRenderer** instance. For details about the parameters, see [AudioRendererOptions](../reference/apis/js-apis-audio.md#audiorendereroptions8).
+ + ```ts + import audio from '@ohos.multimedia.audio'; + + let audioStreamInfo = { + samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, + channels: audio.AudioChannel.CHANNEL_1, + sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, + encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW + }; + + let audioRendererInfo = { + content: audio.ContentType.CONTENT_TYPE_SPEECH, + usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION, + rendererFlags: 0 + }; + + let audioRendererOptions = { + streamInfo: audioStreamInfo, + rendererInfo: audioRendererInfo + }; + + audio.createAudioRenderer(audioRendererOptions, (err, data) => { + if (err) { + console.error(`Invoke createAudioRenderer failed, code is ${err.code}, message is ${err.message}`); + return; + } else { + console.info('Invoke createAudioRenderer succeeded.'); + let audioRenderer = data; + } + }); + ``` + +2. Call **start()** to switch the AudioRenderer to the **running** state and start rendering. + + ```ts + audioRenderer.start((err) => { + if (err) { + console.error(`Renderer start failed, code is ${err.code}, message is ${err.message}`); + } else { + console.info('Renderer start success.'); + } + }); + ``` + +3. Specify the address of the file to render. Open the file and call **write()** to continuously write audio data to the buffer for rendering and playing. To implement personalized playback, process the audio data before writing it. + + ```ts + const bufferSize = await audioRenderer.getBufferSize(); + let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY); + let buf = new ArrayBuffer(bufferSize); + let readsize = await fs.read(file.fd, buf); + let writeSize = await new Promise((resolve, reject) => { + audioRenderer.write(buf, (err, writeSize) => { + if (err) { + reject(err); + } else { + resolve(writeSize); + } + }); + }); + ``` + +4. Call **stop()** to stop rendering. + + ```ts + audioRenderer.stop((err) => { + if (err) { + console.error(`Renderer stop failed, code is ${err.code}, message is ${err.message}`); + } else { + console.info('Renderer stopped.'); + } + }); + ``` + +5. Call **release()** to release the instance. + + ```ts + audioRenderer.release((err) => { + if (err) { + console.error(`Renderer release failed, code is ${err.code}, message is ${err.message}`); + } else { + console.info('Renderer released.'); + } + }); + ``` + +### Sample Code + +Refer to the sample code below to render an audio file using AudioRenderer. + +```ts +import audio from '@ohos.multimedia.audio'; +import fs from '@ohos.file.fs'; + +const TAG = 'AudioRendererDemo'; + +export default class AudioRendererDemo { + private renderModel = undefined; + private audioStreamInfo = { + samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000, // Sampling rate. + channels: audio.AudioChannel.CHANNEL_2, // Channel. + sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sampling format. + encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding format. + } + private audioRendererInfo = { + content: audio.ContentType.CONTENT_TYPE_MUSIC, // Media type. + usage: audio.StreamUsage.STREAM_USAGE_MEDIA, // Audio stream usage type. + rendererFlags: 0 // AudioRenderer flag. + } + private audioRendererOptions = { + streamInfo: this.audioStreamInfo, + rendererInfo: this.audioRendererInfo + } + + // Create an AudioRenderer instance, and set the events to listen for. + init() { + audio.createAudioRenderer(this.audioRendererOptions, (err, renderer) => { // Create an AudioRenderer instance. 
+      if (!err) {
+        console.info(`${TAG}: creating AudioRenderer success`);
+        this.renderModel = renderer;
+        this.renderModel.on('stateChange', (state) => { // Set the events to listen for. A callback is invoked when the AudioRenderer is switched to the specified state.
+          if (state == 1) {
+            console.info('audio renderer state is: STATE_PREPARED');
+          }
+          if (state == 2) {
+            console.info('audio renderer state is: STATE_RUNNING');
+          }
+        });
+        this.renderModel.on('markReach', 1000, (position) => { // Subscribe to the markReach event. A callback is triggered when the number of rendered frames reaches 1000.
+          if (position == 1000) {
+            console.info('ON Triggered successfully');
+          }
+        });
+      } else {
+        console.error(`${TAG}: creating AudioRenderer failed, error: ${err.message}`);
+      }
+    });
+  }
+
+  // Start audio rendering.
+  async start() {
+    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
+    if (stateGroup.indexOf(this.renderModel.state) === -1) { // Rendering can be started only when the AudioRenderer is in the prepared, paused, or stopped state.
+      console.error(`${TAG}: start failed`);
+      return;
+    }
+    await this.renderModel.start(); // Start rendering.
+
+    const bufferSize = await this.renderModel.getBufferSize();
+    let context = getContext(this);
+    let path = context.filesDir;
+    const filePath = path + '/test.wav'; // Use the sandbox path to obtain the file. The actual file path is /data/storage/el2/base/haps/entry/files/test.wav.
+
+    let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
+    let stat = await fs.stat(filePath);
+    let buf = new ArrayBuffer(bufferSize);
+    let len = stat.size % bufferSize === 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
+    for (let i = 0; i < len; i++) {
+      let options = {
+        offset: i * bufferSize,
+        length: bufferSize
+      };
+      let readsize = await fs.read(file.fd, buf, options);
+
+      // buf indicates the audio data to be written to the buffer. Before calling AudioRenderer.write(), you can preprocess the audio data for personalized playback. The AudioRenderer reads the audio data written to the buffer for rendering.
+
+      let writeSize = await new Promise((resolve, reject) => {
+        this.renderModel.write(buf, (err, writeSize) => {
+          if (err) {
+            reject(err);
+          } else {
+            resolve(writeSize);
+          }
+        });
+      });
+      if (this.renderModel.state === audio.AudioState.STATE_RELEASED) { // The rendering stops if the AudioRenderer is in the released state.
+        fs.close(file);
+        await this.renderModel.stop();
+      }
+      if (this.renderModel.state === audio.AudioState.STATE_RUNNING) {
+        if (i === len - 1) { // The rendering stops when the file has been completely read.
+          fs.close(file);
+          await this.renderModel.stop();
+        }
+      }
+    }
+  }
+
+  // Pause the rendering.
+  async pause() {
+    // Rendering can be paused only when the AudioRenderer is in the running state.
+    if (this.renderModel.state !== audio.AudioState.STATE_RUNNING) {
+      console.info('Renderer is not running');
+      return;
+    }
+    await this.renderModel.pause(); // Pause rendering.
+    if (this.renderModel.state === audio.AudioState.STATE_PAUSED) {
+      console.info('Renderer is paused.');
+    } else {
+      console.error('Pausing renderer failed.');
+    }
+  }
+
+  // Stop rendering.
+  async stop() {
+    // Rendering can be stopped only when the AudioRenderer is in the running or paused state.
+    if (this.renderModel.state !== audio.AudioState.STATE_RUNNING && this.renderModel.state !== audio.AudioState.STATE_PAUSED) {
+      console.info('Renderer is not running or paused.');
+      return;
+    }
+    await this.renderModel.stop(); // Stop rendering.
+    if (this.renderModel.state === audio.AudioState.STATE_STOPPED) {
+      console.info('Renderer stopped.');
+    } else {
+      console.error('Stopping renderer failed.');
+    }
+  }
+
+  // Release the instance.
+  async release() {
+    // The AudioRenderer can be released only when it is not in the released state.
+    if (this.renderModel.state === audio.AudioState.STATE_RELEASED) {
+      console.info('Renderer already released');
+      return;
+    }
+    await this.renderModel.release(); // Release the instance.
+    if (this.renderModel.state === audio.AudioState.STATE_RELEASED) {
+      console.info('Renderer released');
+    } else {
+      console.error('Renderer release failed.');
+    }
+  }
+}
+```
+
+When audio streams with the same or higher priority need to use the output device, the current audio playback will be interrupted. The application can respond to and handle the interruption event. For details about how to process concurrent audio playback, see [Audio Playback Concurrency Policy](audio-playback-concurrency.md).
diff --git a/en/application-dev/media/using-avplayer-for-playback.md b/en/application-dev/media/using-avplayer-for-playback.md
new file mode 100644
index 0000000000000000000000000000000000000000..e81d43da6ea25beb5c94d3c5d052f6926b1d027b
--- /dev/null
+++ b/en/application-dev/media/using-avplayer-for-playback.md
@@ -0,0 +1,165 @@
+# Using AVPlayer for Audio Playback
+
+The AVPlayer is used to play raw media assets in an end-to-end manner. In this topic, you will learn how to use the AVPlayer to play a complete piece of music.
+
+If you want the application to continue playing the music in the background or when the screen is off, you must use the [AVSession](avsession-overview.md) and [continuous task](../task-management/continuous-task-dev-guide.md) to prevent the playback from being forcibly interrupted by the system.
+
+
+The full playback process includes creating an **AVPlayer** instance, setting the media asset to play, setting playback parameters (volume, speed, and focus mode), controlling playback (play, pause, seek, and stop), resetting the playback configuration, and releasing the instance.
+
+
+During application development, you can use the **state** attribute of the AVPlayer to obtain the AVPlayer state or call **on('stateChange')** to listen for state changes. If the application performs an operation when the AVPlayer is not in the given state, the system may throw an exception or generate other undefined behavior.
+
+
+**Figure 1** Playback state transition
+![Playback state change](figures/playback-status-change.png)
+
+For details about the state, see [AVPlayerState](../reference/apis/js-apis-media.md#avplayerstate9). When the AVPlayer is in the **prepared**, **playing**, **paused**, or **completed** state, the playback engine is working and a large amount of RAM is occupied. If your application does not need to use the AVPlayer, call **reset()** or **release()** to release the instance.
+
+## How to Develop
+
+Read [AVPlayer](../reference/apis/js-apis-media.md#avplayer9) for the API reference.
+
+1. Call **createAVPlayer()** to create an **AVPlayer** instance. The AVPlayer is in the **idle** state.
+
+2. Set the events to listen for, which will be used in the full-process scenario. The table below lists the supported events.
+   | Event Type| Description|
+   | -------- | -------- |
+   | stateChange | Mandatory; used to listen for changes of the **state** attribute of the AVPlayer.|
+   | error | Mandatory; used to listen for AVPlayer errors.|
+   | durationUpdate | Used to listen for progress bar updates to refresh the media asset duration.|
+   | timeUpdate | Used to listen for the current position of the progress bar to refresh the current time.|
+   | seekDone | Used to listen for the completion status of the **seek()** request.<br>This event is reported when the AVPlayer seeks to the playback position specified in **seek()**.|
+   | speedDone | Used to listen for the completion status of the **setSpeed()** request.<br>This event is reported when the AVPlayer plays music at the speed specified in **setSpeed()**.|
+   | volumeChange | Used to listen for the completion status of the **setVolume()** request.<br>This event is reported when the AVPlayer plays music at the volume specified in **setVolume()**.|
+   | bufferingUpdate | Used to listen for network playback buffer information. This event reports the buffer percentage and playback progress.|
+   | audioInterrupt | Used to listen for audio interruption. This event is used together with the **audioInterruptMode** attribute.<br>This event is reported when the current audio playback is interrupted by another audio stream (for example, when a call comes in), so that the application can process the event in time.|
+
+3. Set the media asset URL. The AVPlayer enters the **initialized** state.
+   > **NOTE**
+   >
+   > The URL in the code snippet below is for reference only. You need to check the media asset validity and set the URL based on service requirements.
+   >
+   > - If local files are used for playback, ensure that the files are available and the application sandbox path is used for access. For details about how to obtain the application sandbox path, see [Obtaining the Application Development Path](../application-models/application-context-stage.md#obtaining-the-application-development-path). For details about the application sandbox and how to push files to the application sandbox, see [File Management](../file-management/app-sandbox-directory.md).
+   >
+   > - If a network playback path is used, you must request the ohos.permission.INTERNET [permission](../security/accesstoken-guidelines.md).
+   >
+   > - You can also use **ResourceManager.getRawFd** to obtain the file descriptor of a file packed in the HAP file. For details, see [ResourceManager API Reference](../reference/apis/js-apis-resource-manager.md#getrawfd9).
+   >
+   > - The [playback formats and protocols](avplayer-avrecorder-overview.md#supported-formats-and-protocols) in use must be those supported by the system.
+
+4. Call **prepare()** to switch the AVPlayer to the **prepared** state. In this state, you can obtain the duration of the media asset to play and set the volume.
+
+5. Call **play()**, **pause()**, **seek()**, and **stop()** to perform audio playback control as required.
+
+6. (Optional) Call **reset()** to reset the AVPlayer. The AVPlayer enters the **idle** state again and you can change the media asset URL.
+
+7. Call **release()** to switch the AVPlayer to the **released** state. Now your application exits the playback.
+
+## Sample Code
+
+Refer to the sample code below to play a complete piece of music.
+
+```ts
+import media from '@ohos.multimedia.media';
+import fs from '@ohos.file.fs';
+import common from '@ohos.app.ability.common';
+
+export class AVPlayerDemo {
+  private avPlayer;
+  private count: number = 0;
+
+  // Set AVPlayer callback functions.
+  setAVPlayerCallback() {
+    // Callback function for the seek operation.
+    this.avPlayer.on('seekDone', (seekDoneTime) => {
+      console.info(`AVPlayer seek succeeded, seek time is ${seekDoneTime}`);
+    })
+    // Callback function for errors. If an error occurs during the operation on the AVPlayer, reset() is called to reset the AVPlayer.
+    this.avPlayer.on('error', (err) => {
+      console.error(`Invoke avPlayer failed, code is ${err.code}, message is ${err.message}`);
+      this.avPlayer.reset(); // Call reset() to reset the AVPlayer, which enters the idle state.
+    })
+    // Callback function for state changes.
+    this.avPlayer.on('stateChange', async (state, reason) => {
+      switch (state) {
+        case 'idle': // This state is reported upon a successful callback of reset().
+          console.info('AVPlayer state idle called.');
+          this.avPlayer.release(); // Call release() to release the instance.
+          break;
+        case 'initialized': // This state is reported when the AVPlayer sets the playback source.
+ console.info('AVPlayerstate initialized called.'); + this.avPlayer.prepare().then(() => { + console.info('AVPlayer prepare succeeded.'); + }, (err) => { + console.error(`Invoke prepare failed, code is ${err.code}, message is ${err.message}`); + }); + break; + case 'prepared': // This state is reported upon a successful callback of prepare(). + console.info('AVPlayer state prepared called.'); + this.avPlayer.play(); // Call play() to start playback. + break; + case 'playing': // This state is reported upon a successful callback of play(). + console.info('AVPlayer state playing called.'); + if (this.count !== 0) { + console.info('AVPlayer start to seek.'); + this.avPlayer.seek (this.avPlayer.duration); // Call seek() to seek to the end of the audio clip. + } else { + this.avPlayer.pause(); // Call pause() to pause the playback. + } + this.count++; + break; + case 'paused': // This state is reported upon a successful callback of pause(). + console.info('AVPlayer state paused called.'); + this.avPlayer.play(); // Call play() again to start playback. + break; + case 'completed': // This state is reported upon the completion of the playback. + console.info('AVPlayer state completed called.'); + this.avPlayer.stop(); // Call stop() to stop the playback. + break; + case 'stopped': // This state is reported upon a successful callback of stop(). + console.info('AVPlayer state stopped called.'); + this.avPlayer.reset(); // Call reset() to reset the AVPlayer state. + break; + case 'released': + console.info('AVPlayer state released called.'); + break; + default: + console.info('AVPlayer state unknown called.'); + break; + } + }) + } + + // The following demo shows how to use the file system to open the sandbox address, obtain the media file address, and play the media file using the URL attribute. + async avPlayerUrlDemo() { + // Create an AVPlayer instance. + this.avPlayer = await media.createAVPlayer(); + // Set a callback function for state changes. + this.setAVPlayerCallback(); + let fdPath = 'fd://'; + // Obtain the sandbox address filesDir through UIAbilityContext. The stage model is used as an example. + let context = getContext(this) as common.UIAbilityContext; + let pathDir = context.filesDir; + let path = pathDir + '/01.mp3'; + // Open the corresponding file address to obtain the file descriptor and assign a value to the URL to trigger the reporting of the initialized state. + let file = await fs.open(path); + fdPath = fdPath + '' + file.fd; + this.avPlayer.url = fdPath; + } + + // The following demo shows how to use resourceManager to obtain the media file packed in the HAP file and play the media file by using the fdSrc attribute. + async avPlayerFdSrcDemo() { + // Create an AVPlayer instance. + this.avPlayer = await media.createAVPlayer(); + // Set a callback function for state changes. + this.setAVPlayerCallback(); + // Call getRawFd of the resourceManager member of UIAbilityContext to obtain the media asset URL. + // The return type is {fd,offset,length}, where fd indicates the file descriptor address of the HAP file, offset indicates the media asset offset, and length indicates the duration of the media asset to play. + let context = getContext(this) as common.UIAbilityContext; + let fileDescriptor = await context.resourceManager.getRawFd('01.mp3'); + // Assign a value to fdSrc to trigger the reporting of the initialized state. 
+    this.avPlayer.fdSrc = fileDescriptor;
+  }
+}
+```
diff --git a/en/application-dev/media/using-avrecorder-for-recording.md b/en/application-dev/media/using-avrecorder-for-recording.md
new file mode 100644
index 0000000000000000000000000000000000000000..5d814e86c92b49b48923de3d0366032c8f89c214
--- /dev/null
+++ b/en/application-dev/media/using-avrecorder-for-recording.md
@@ -0,0 +1,180 @@
+# Using AVRecorder for Audio Recording
+
+You will learn how to use the AVRecorder to develop audio recording functionalities, including starting, pausing, resuming, and stopping recording.
+
+During application development, you can use the **state** attribute of the AVRecorder to obtain the AVRecorder state or call **on('stateChange')** to listen for state changes. Your code must meet the state machine requirements. For example, **pause()** is called only when the AVRecorder is in the **started** state, and **resume()** is called only when it is in the **paused** state.
+
+**Figure 1** Recording state transition
+
+![Recording state change](figures/recording-status-change.png)
+
+For details about the state, see [AVRecorderState](../reference/apis/js-apis-media.md#avrecorderstate9).
+
+
+## How to Develop
+
+Read [AVRecorder](../reference/apis/js-apis-media.md#avrecorder9) for the API reference.
+
+1. Create an **AVRecorder** instance. The AVRecorder enters the **idle** state.
+
+   ```ts
+   import media from '@ohos.multimedia.media';
+
+   let avRecorder = undefined;
+   media.createAVRecorder().then((recorder) => {
+     avRecorder = recorder;
+   }, (err) => {
+     console.error(`Invoke createAVRecorder failed, code is ${err.code}, message is ${err.message}`);
+   })
+   ```
+
+2. Set the events to listen for.
+   | Event Type| Description|
+   | -------- | -------- |
+   | stateChange | Mandatory; used to listen for changes of the **state** attribute of the AVRecorder.|
+   | error | Mandatory; used to listen for AVRecorder errors.|
+
+
+   ```ts
+   // Callback function for state changes.
+   avRecorder.on('stateChange', (state, reason) => {
+     console.log(`current state is ${state}`);
+     // You can add the action to be performed after the state is switched.
+   })
+
+   // Callback function for errors.
+   avRecorder.on('error', (err) => {
+     console.error(`avRecorder failed, code is ${err.code}, message is ${err.message}`);
+   })
+   ```
+
+3. Set audio recording parameters and call **prepare()**. The AVRecorder enters the **prepared** state.
+   > **NOTE**
+   >
+   > Pay attention to the following when configuring parameters:
+   >
+   > - In pure audio recording scenarios, set only audio-related parameters in **avConfig** of **prepare()**.
+   >   If video-related parameters are configured, an error will be reported in subsequent steps. If video recording is required, follow the instructions provided in [Video Recording Development](video-recording.md).
+   >
+   > - The [recording formats](avplayer-avrecorder-overview.md#supported-formats) in use must be those supported by the system.
+   >
+   > - The recording output URL (**url** in **avConfig** in the sample code) must be in the format of fd://xx (where xx indicates a file descriptor). You must call [ohos.file.fs](../reference/apis/js-apis-file-fs.md) to implement access to the application file. For details, see [Application File Access and Management](../file-management/app-file-access.md).
+
+
+   ```ts
+   let avProfile = {
+     audioBitrate: 100000, // Audio bit rate.
+     audioChannels: 2, // Number of audio channels.
+     audioCodec: media.CodecMimeType.AUDIO_AAC, // Audio encoding format. Currently, only AAC is supported.
+     audioSampleRate: 48000, // Audio sampling rate.
+     fileFormat: media.ContainerFormatType.CFT_MPEG_4A, // Encapsulation format. Currently, only M4A is supported.
+   }
+   let avConfig = {
+     audioSourceType: media.AudioSourceType.AUDIO_SOURCE_TYPE_MIC, // Audio input source. In this example, the microphone is used.
+     profile: avProfile,
+     url: 'fd://35', // Obtain the file descriptor of the created audio file by referring to the sample code in Application File Access and Management.
+   }
+   avRecorder.prepare(avConfig).then(() => {
+     console.log('Invoke prepare succeeded.');
+   }, (err) => {
+     console.error(`Invoke prepare failed, code is ${err.code}, message is ${err.message}`);
+   })
+   ```
+
+4. Call **start()** to start recording. The AVRecorder enters the **started** state.
+
+5. Call **pause()** to pause recording. The AVRecorder enters the **paused** state.
+
+6. Call **resume()** to resume recording. The AVRecorder enters the **started** state again.
+
+7. Call **stop()** to stop recording. The AVRecorder enters the **stopped** state.
+
+8. Call **reset()** to reset the resources. The AVRecorder enters the **idle** state. In this case, you can reconfigure the recording parameters.
+
+9. Call **release()** to switch the AVRecorder to the **released** state. Your application can then exit the recording.
+
+
+## Sample Code
+
+  Refer to the sample code below to complete the process of starting, pausing, resuming, and stopping recording.
+
+```ts
+import media from '@ohos.multimedia.media';
+
+export class AudioRecorderDemo {
+  private avRecorder;
+  private avProfile = {
+    audioBitrate: 100000, // Audio bit rate.
+    audioChannels: 2, // Number of audio channels.
+    audioCodec: media.CodecMimeType.AUDIO_AAC, // Audio encoding format. Currently, only AAC is supported.
+    audioSampleRate: 48000, // Audio sampling rate.
+    fileFormat: media.ContainerFormatType.CFT_MPEG_4A, // Encapsulation format. Currently, only M4A is supported.
+  };
+  private avConfig = {
+    audioSourceType: media.AudioSourceType.AUDIO_SOURCE_TYPE_MIC, // Audio input source. In this example, the microphone is used.
+    profile: this.avProfile,
+    url: 'fd://35', // Create, read, and write a file by referring to the sample code in Application File Access and Management.
+  };
+
+  // Set AVRecorder callback functions.
+  setAudioRecorderCallback() {
+    // Callback function for state changes.
+    this.avRecorder.on('stateChange', (state, reason) => {
+      console.log(`AudioRecorder current state is ${state}`);
+    })
+    // Callback function for errors.
+    this.avRecorder.on('error', (err) => {
+      console.error(`AudioRecorder failed, code is ${err.code}, message is ${err.message}`);
+    })
+  }
+
+  // Process of starting recording.
+  async startRecordingProcess() {
+    // 1. Create an AVRecorder instance.
+    this.avRecorder = await media.createAVRecorder();
+    this.setAudioRecorderCallback();
+    // 2. Obtain the file descriptor of the recording file and assign it to the url in avConfig. For details, see FilePicker.
+    // 3. Set recording parameters to complete the preparations.
+    await this.avRecorder.prepare(this.avConfig);
+    // 4. Start recording.
+    await this.avRecorder.start();
+  }
+
+  // Process of pausing recording.
+  async pauseRecordingProcess() {
+    if (this.avRecorder.state === 'started') { // pause() can be called only when the AVRecorder is in the started state.
+      await this.avRecorder.pause();
+    }
+  }
+
+  // Process of resuming recording.
+  async resumeRecordingProcess() {
+    if (this.avRecorder.state === 'paused') { // resume() can be called only when the AVRecorder is in the paused state.
+      await this.avRecorder.resume();
+    }
+  }
+
+  // Process of stopping recording.
+  async stopRecordingProcess() {
+    // 1. Stop recording.
+    if (this.avRecorder.state === 'started'
+      || this.avRecorder.state === 'paused') { // stop() can be called only when the AVRecorder is in the started or paused state.
+      await this.avRecorder.stop();
+    }
+    // 2. Reset the AVRecorder.
+    await this.avRecorder.reset();
+    // 3. Release the AVRecorder instance.
+    await this.avRecorder.release();
+    // 4. Close the file descriptor of the recording file.
+  }
+
+  // Complete sample code for starting, pausing, resuming, and stopping recording.
+  async audioRecorderDemo() {
+    await this.startRecordingProcess(); // Start recording.
+    // Insert a delay here (for example, by using sleep) if you want to control the recording duration.
+    await this.pauseRecordingProcess(); // Pause recording.
+    await this.resumeRecordingProcess(); // Resume recording.
+    await this.stopRecordingProcess(); // Stop recording.
+  }
+}
+```
diff --git a/en/application-dev/media/using-avsession-controller.md b/en/application-dev/media/using-avsession-controller.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e4b69d8b48f5acad64f120892062e66d67c6b12
--- /dev/null
+++ b/en/application-dev/media/using-avsession-controller.md
@@ -0,0 +1,244 @@
+# AVSession Controller
+
+Media Controller preset in OpenHarmony functions as the controller to interact with audio and video applications, for example, obtaining and displaying media information and delivering control commands.
+
+You can develop a system application (for example, a new playback control center or voice assistant) as the controller to interact with audio and video applications in the system.
+
+## Basic Concepts
+
+- AVSessionDescriptor: session information, including the session ID, session type (audio/video), custom session name (**sessionTag**), information about the corresponding application (**elementName**), and whether the session is pinned on top (**isTopSession**).
+
+- Top session: session with the highest priority in the system, for example, a session that is being played. Generally, the controller must hold an **AVSessionController** object to communicate with a session. However, the controller can directly communicate with the top session, for example, directly sending a control command or key event, without holding an **AVSessionController** object.
+
+## Available APIs
+
+The table below lists the key APIs used by the controller. The APIs use either a callback or promise to return the result. The APIs listed below use a callback. They provide the same functions as their counterparts that use a promise.
+
+For details, see [AVSession Management](../reference/apis/js-apis-avsession.md).
+
+| API| Description|
+| -------- | -------- |
+| getAllSessionDescriptors(callback: AsyncCallback&lt;Array&lt;Readonly&lt;AVSessionDescriptor&gt;&gt;&gt;): void | Obtains the descriptors of all AVSessions in the system.|
+| createController(sessionId: string, callback: AsyncCallback&lt;AVSessionController&gt;): void | Creates an AVSessionController.|
+| getValidCommands(callback: AsyncCallback&lt;Array&lt;AVControlCommandType&gt;&gt;): void | Obtains valid commands supported by the AVSession.<br>Control commands that an audio and video application listens for when it accesses the AVSession are considered valid commands supported by the AVSession. For details, see [AVSession Provider](using-avsession-developer.md).|
+| getLaunchAbility(callback: AsyncCallback&lt;WantAgent&gt;): void | Obtains the UIAbility that is configured in the AVSession and can be started.<br>The UIAbility configured here is started when a user operates the UI of the controller, for example, clicking a widget in Media Controller.|
+| sendAVKeyEvent(event: KeyEvent, callback: AsyncCallback&lt;void&gt;): void | Sends a key event to an AVSession through the AVSessionController object.|
+| sendSystemAVKeyEvent(event: KeyEvent, callback: AsyncCallback&lt;void&gt;): void | Sends a key event to the top session.|
+| sendControlCommand(command: AVControlCommand, callback: AsyncCallback&lt;void&gt;): void | Sends a control command to an AVSession through the AVSessionController object.|
+| sendSystemControlCommand(command: AVControlCommand, callback: AsyncCallback&lt;void&gt;): void | Sends a control command to the top session.|
+
+## How to Develop
+
+To enable a system application to access the AVSession service as a controller, proceed as follows:
+
+1. Obtain **AVSessionDescriptor** through AVSessionManager and create an **AVSessionController** object.
+   The controller may obtain all **AVSessionDescriptor**s in the current system and create an **AVSessionController** object for each session, so as to perform unified playback control on all the audio and video applications.
+
+   ```ts
+   // Import the AVSessionManager module.
+   import AVSessionManager from '@ohos.multimedia.avsession';
+
+   // Define global variables.
+   let g_controller = new Array<AVSessionManager.AVSessionController>();
+   let g_centerSupportCmd: Set<string> = new Set(['play', 'pause', 'playNext', 'playPrevious', 'fastForward', 'rewind', 'seek', 'setSpeed', 'setLoopMode', 'toggleFavorite']);
+   let g_validCmd: Set<string>;
+   // Obtain the session descriptors and create an AVSessionController object.
+   AVSessionManager.getAllSessionDescriptors().then((descriptors) => {
+     descriptors.forEach((descriptor) => {
+       AVSessionManager.createController(descriptor.sessionId).then((controller) => {
+         g_controller.push(controller);
+       }).catch((err) => {
+         console.error(`createController : ERROR : ${err.message}`);
+       });
+     });
+   }).catch((err) => {
+     console.error(`getAllSessionDescriptors : ERROR : ${err.message}`);
+   });
+
+   ```
+
+2. Listen for the session state and service state events.
+
+   The following session state events are available:
+
+   - **sessionCreate**: triggered when a session is created.
+   - **sessionDestroy**: triggered when a session is destroyed.
+   - **topSessionChange**: triggered when the top session changes.
+
+   The service state event **sessionServiceDie** is reported when the AVSession service is abnormal.
+
+   ```ts
+   // Subscribe to the 'sessionCreate' event and create an AVSessionController object.
+   AVSessionManager.on('sessionCreate', (session) => {
+     // After an AVSession is added, you must create an AVSessionController object.
+     AVSessionManager.createController(session.sessionId).then((controller) => {
+       g_controller.push(controller);
+     }).catch((err) => {
+       console.error(`createController : ERROR : ${err.message}`);
+     });
+   });
+
+   // Subscribe to the 'sessionDestroy' event to enable the application to get notified when the session dies.
+   AVSessionManager.on('sessionDestroy', (session) => {
+     let index = g_controller.findIndex((controller) => {
+       return controller.sessionId === session.sessionId;
+     });
+     if (index !== -1) { // findIndex() returns -1 if no matching controller is found.
+       g_controller[index].destroy();
+       g_controller.splice(index, 1);
+     }
+   });
+   // Subscribe to the 'topSessionChange' event.
+   AVSessionManager.on('topSessionChange', (session) => {
+     let index = g_controller.findIndex((controller) => {
+       return controller.sessionId === session.sessionId;
+     });
+     // Place the session on the top.
+     if (index !== 0) {
+       g_controller.sort((a, b) => {
+         return a.sessionId === session.sessionId ? -1 : 0;
+       });
+     }
+   });
+   // Subscribe to the 'sessionServiceDie' event.
+   AVSessionManager.on('sessionServiceDie', () => {
+     // The server is abnormal, and the application clears resources.
+     console.info("Server exception.");
+   })
+   ```
+
+3. Subscribe to media information changes and other session events.
+
+   The following media information change events are available:
+
+   - **metadataChange**: triggered when the session metadata changes.
+   - **playbackStateChange**: triggered when the playback state changes.
+   - **activeStateChange**: triggered when the activation state of the session changes.
+   - **validCommandChange**: triggered when the valid commands supported by the session change.
+   - **outputDeviceChange**: triggered when the output device changes.
+   - **sessionDestroy**: triggered when a session is destroyed.
+
+   The controller can listen for events as required.
+
+   ```ts
+   // Subscribe to the 'activeStateChange' event.
+   controller.on('activeStateChange', (isActive) => {
+     if (isActive) {
+       console.info("The widget corresponding to the controller is highlighted.");
+     } else {
+       console.info("The widget corresponding to the controller is invalid.");
+     }
+   });
+   // Subscribe to the 'sessionDestroy' event to enable the controller to get notified when the session dies.
+   controller.on('sessionDestroy', () => {
+     console.info('on sessionDestroy : SUCCESS ');
+     controller.destroy().then(() => {
+       console.info('destroy : SUCCESS ');
+     }).catch((err) => {
+       console.error(`destroy : ERROR : ${err.message}`);
+     });
+   });
+
+   // Subscribe to metadata changes.
+   let metaFilter = ['assetId', 'title', 'description'];
+   controller.on('metadataChange', metaFilter, (metadata) => {
+     console.info(`on metadataChange assetId : ${metadata.assetId}`);
+   });
+   // Subscribe to playback state changes.
+   let playbackFilter = ['state', 'speed', 'loopMode'];
+   controller.on('playbackStateChange', playbackFilter, (playbackState) => {
+     console.info(`on playbackStateChange state : ${playbackState.state}`);
+   });
+   // Subscribe to supported command changes.
+   controller.on('validCommandChange', (cmds) => {
+     console.info(`validCommandChange : SUCCESS : size : ${cmds.size}`);
+     console.info(`validCommandChange : SUCCESS : cmds : ${cmds.values()}`);
+     g_validCmd.clear();
+     for (let c of g_centerSupportCmd) {
+       if (cmds.has(c)) {
+         g_validCmd.add(c);
+       }
+     }
+   });
+   // Subscribe to output device changes.
+   controller.on('outputDeviceChange', (device) => {
+     console.info(`on outputDeviceChange device isRemote : ${device.isRemote}`);
+   });
+   ```
+
+4. Obtain the media information transferred by the provider for display on the UI, for example, displaying the track being played and the playback state in Media Controller.
+
+   ```ts
+   async getInfoFromSessionByController() {
+     // It is assumed that an AVSessionController object corresponding to the session already exists. For details about how to create an AVSessionController object, see the code snippet above.
+     let controller: AVSessionManager.AVSessionController = ALLREADY_HAVE_A_CONTROLLER;
+     // Obtain the session ID.
+     let sessionId: string = controller.sessionId;
+     console.info(`get sessionId by controller : ${sessionId}`);
+     // Obtain the activation state of the session.
+     let isActive: boolean = await controller.isActive();
+     console.info(`get activeState by controller : ${isActive}`);
+     // Obtain the media information of the session.
+     let metadata: AVSessionManager.AVMetadata = await controller.getAVMetadata();
+     console.info(`get media title by controller : ${metadata.title}`);
+     console.info(`get media artist by controller : ${metadata.artist}`);
+     // Obtain the playback information of the session.
+     let avPlaybackState: AVSessionManager.AVPlaybackState = await controller.getAVPlaybackState();
+     console.info(`get playbackState by controller : ${avPlaybackState.state}`);
+     console.info(`get favoriteState by controller : ${avPlaybackState.isFavorite}`);
+   }
+   ```
+
+5. Control the playback behavior, for example, sending a command to operate (play/pause/previous/next) the item being played in Media Controller.
+
+   After listening for the control command event, the audio and video application serving as the provider needs to implement the corresponding operation.
+
+
+   ```ts
+   async sendCommandToSessionByController() {
+     // It is assumed that an AVSessionController object corresponding to the session already exists. For details about how to create an AVSessionController object, see the code snippet above.
+     let controller: AVSessionManager.AVSessionController = ALLREADY_HAVE_A_CONTROLLER;
+     // Obtain the commands supported by the session.
+     let validCommandTypeArray: Array<AVSessionManager.AVControlCommandType> = await controller.getValidCommands();
+     console.info(`get validCommandArray by controller : length : ${validCommandTypeArray.length}`);
+     // Deliver the 'play' command.
+     // Deliver a command only when it is valid. A properly functioning session is expected to implement the corresponding playback action.
+     if (validCommandTypeArray.indexOf('play') >= 0) {
+       let avCommand: AVSessionManager.AVControlCommand = {command:'play'};
+       controller.sendControlCommand(avCommand);
+     }
+     // Deliver the 'pause' command.
+     if (validCommandTypeArray.indexOf('pause') >= 0) {
+       let avCommand: AVSessionManager.AVControlCommand = {command:'pause'};
+       controller.sendControlCommand(avCommand);
+     }
+     // Deliver the 'playPrevious' command.
+     if (validCommandTypeArray.indexOf('playPrevious') >= 0) {
+       let avCommand: AVSessionManager.AVControlCommand = {command:'playPrevious'};
+       controller.sendControlCommand(avCommand);
+     }
+     // Deliver the 'playNext' command.
+     if (validCommandTypeArray.indexOf('playNext') >= 0) {
+       let avCommand: AVSessionManager.AVControlCommand = {command:'playNext'};
+       controller.sendControlCommand(avCommand);
+     }
+   }
+   ```
+
+6. When the controller application exits, cancel the listeners and destroy the **AVSessionController** object to release the resources.
+
+   ```ts
+   async destroyController() {
+     // It is assumed that an AVSessionController object corresponding to the session already exists. For details about how to create an AVSessionController object, see the code snippet above.
+     let controller: AVSessionManager.AVSessionController = ALLREADY_HAVE_A_CONTROLLER;
+
+     // Destroy the AVSessionController object. After being destroyed, it is no longer available.
+     controller.destroy(function (err) {
+       if (err) {
+         console.error(`Destroy controller ERROR : code: ${err.code}, message: ${err.message}`);
+       } else {
+         console.info('Destroy controller SUCCESS');
+       }
+     });
+   }
+   ```
diff --git a/en/application-dev/media/using-avsession-developer.md b/en/application-dev/media/using-avsession-developer.md
new file mode 100644
index 0000000000000000000000000000000000000000..07bd4bf1297f3afc5352d30e9acd674fe056f815
--- /dev/null
+++ b/en/application-dev/media/using-avsession-developer.md
@@ -0,0 +1,198 @@
+# AVSession Provider
+
+An audio and video application needs to access the AVSession service as a provider in order to display media information in the controller (for example, Media Controller) and respond to control commands delivered by the controller.
+
+## Basic Concepts
+
+- AVMetadata: media data related attributes, including the IDs of the current media asset (**assetId**), previous media asset (**previousAssetId**), and next media asset (**nextAssetId**), title, author, album, writer, and duration.
+
+- AVPlaybackState: playback state attributes, including the playback state, position, speed, buffered time, loop mode, and whether the media asset is favorited (**isFavorite**).
+
+## Available APIs
+
+The table below lists the key APIs used by the provider. The APIs use either a callback or promise to return the result. The APIs listed below use a callback. They provide the same functions as their counterparts that use a promise.
+
+For details, see [AVSession Management](../reference/apis/js-apis-avsession.md).
+
+| API| Description|
+| -------- | -------- |
+| createAVSession(context: Context, tag: string, type: AVSessionType, callback: AsyncCallback&lt;AVSession&gt;): void | Creates an AVSession.<br>Only one AVSession can be created for a UIAbility.|
+| setAVMetadata(data: AVMetadata, callback: AsyncCallback&lt;void&gt;): void | Sets AVSession metadata.|
+| setAVPlaybackState(state: AVPlaybackState, callback: AsyncCallback&lt;void&gt;): void | Sets the AVSession playback state.|
+| setLaunchAbility(ability: WantAgent, callback: AsyncCallback&lt;void&gt;): void | Sets the UIAbility to be started by the controller.|
+| getController(callback: AsyncCallback&lt;AVSessionController&gt;): void | Obtains the controller of the AVSession.|
+| activate(callback: AsyncCallback&lt;void&gt;): void | Activates the AVSession.|
+| destroy(callback: AsyncCallback&lt;void&gt;): void | Destroys the AVSession.|
+
+## How to Develop
+
+To enable an audio and video application to access the AVSession service as a provider, proceed as follows:
+
+1. Call an API in the **AVSessionManager** class to create and activate an **AVSession** object.
+
+   ```ts
+   import AVSessionManager from '@ohos.multimedia.avsession'; // Import the AVSessionManager module.
+
+   // Create an AVSession object.
+   async createSession() {
+     let session: AVSessionManager.AVSession = await AVSessionManager.createAVSession(this.context, 'SESSION_NAME', 'audio');
+     session.activate();
+     console.info(`session create done : sessionId : ${session.sessionId}`);
+   }
+   ```
+
+2. Set AVSession information, which includes:
+   - AVMetadata
+   - AVPlaybackState
+
+   The controller will call an API in the **AVSessionController** class to obtain the information and display or process the information.
+
+   ```ts
+   async setSessionInfo() {
+     // It is assumed that an AVSession object has been created. For details about how to create an AVSession object, see the code snippet above.
+     let session: AVSessionManager.AVSession = ALLREADY_CREATE_A_SESSION;
+     // The player logic that triggers changes in the session metadata and playback state is omitted here.
+     // Set necessary session metadata.
+     let metadata: AVSessionManager.AVMetadata = {
+       assetId: "0",
+       title: "TITLE",
+       artist: "ARTIST"
+     };
+     session.setAVMetadata(metadata).then(() => {
+       console.info('SetAVMetadata successfully');
+     }).catch((err) => {
+       console.error(`SetAVMetadata BusinessError: code: ${err.code}, message: ${err.message}`);
+     });
+     // Set the playback state to paused and set isFavorite to false.
+     let playbackState: AVSessionManager.AVPlaybackState = {
+       state: AVSessionManager.PlaybackState.PLAYBACK_STATE_PAUSE,
+       isFavorite: false
+     };
+     session.setAVPlaybackState(playbackState, function (err) {
+       if (err) {
+         console.error(`SetAVPlaybackState BusinessError: code: ${err.code}, message: ${err.message}`);
+       } else {
+         console.info('SetAVPlaybackState successfully');
+       }
+     });
+   }
+   ```
+
+3. Set the UIAbility to be started by the controller. The UIAbility configured here is started when a user operates the UI of the controller, for example, clicking a widget in Media Controller.
+   The UIAbility is set through the **WantAgent** API. For details, see [WantAgent](../reference/apis/js-apis-app-ability-wantAgent.md).
+
+   ```ts
+   import WantAgent from "@ohos.app.ability.wantAgent";
+   ```
+
+   ```ts
+   // It is assumed that an AVSession object has been created. For details about how to create an AVSession object, see the code snippet above.
+   let session: AVSessionManager.AVSession = ALLREADY_CREATE_A_SESSION;
+   let wantAgentInfo = {
+     wants: [
+       {
+         bundleName: "com.example.musicdemo",
+         abilityName: "com.example.musicdemo.MainAbility"
+       }
+     ],
+     operationType: WantAgent.OperationType.START_ABILITIES,
+     requestCode: 0,
+     wantAgentFlags: [WantAgent.WantAgentFlags.UPDATE_PRESENT_FLAG]
+   }
+   WantAgent.getWantAgent(wantAgentInfo).then((agent) => {
+     session.setLaunchAbility(agent);
+   })
+   ```
+
+4. Listen for control commands delivered by the controller, for example, Media Controller.
+   > **NOTE**
+   >
+   > After the provider registers a listener for the control command event, the event will be reflected in **getValidCommands()** of the controller. In other words, the controller determines that the command is valid and triggers the corresponding event as required. To ensure that the control commands delivered by the controller can be executed normally, the provider should not use a null implementation for listening.
+
+   ```ts
+   async setListenerForMesFromController() {
+     // It is assumed that an AVSession object has been created. For details about how to create an AVSession object, see the code snippet above.
+     let session: AVSessionManager.AVSession = ALLREADY_CREATE_A_SESSION;
+     // Generally, logic processing on the player is implemented in the listener.
+     // After the processing is complete, use the setter to synchronize the playback information. For details, see the code snippet above.
+     session.on('play', () => {
+       console.info('on play , do play task');
+
+       // do some tasks ···
+     });
+     session.on('pause', () => {
+       console.info('on pause , do pause task');
+       // do some tasks ···
+     });
+     session.on('stop', () => {
+       console.info('on stop , do stop task');
+       // do some tasks ···
+     });
+     session.on('playNext', () => {
+       console.info('on playNext , do playNext task');
+       // do some tasks ···
+     });
+     session.on('playPrevious', () => {
+       console.info('on playPrevious , do playPrevious task');
+       // do some tasks ···
+     });
+   }
+   ```
+
+5. Obtain an **AVSessionController** object for this **AVSession** object for interaction.
+
+   ```ts
+   async createControllerFromSession() {
+     // It is assumed that an AVSession object has been created. For details about how to create an AVSession object, see the code snippet above.
+     let session: AVSessionManager.AVSession = ALLREADY_CREATE_A_SESSION;
+
+     // Obtain an AVSessionController object for this AVSession object.
+     let controller: AVSessionManager.AVSessionController = await session.getController();
+
+     // The AVSessionController object can interact with the AVSession object, for example, by delivering a control command.
+     let avCommand: AVSessionManager.AVControlCommand = {command:'play'};
+     controller.sendControlCommand(avCommand);
+
+     // Alternatively, listen for state changes.
+     controller.on('playbackStateChange', 'all', (state: AVSessionManager.AVPlaybackState) => {
+
+       // do some things
+     });
+
+     // The AVSessionController object can perform many operations. For details, see the description of the controller.
+   }
+   ```
+
+6. When the audio and video application exits and does not need to continue playback, cancel the listeners and destroy the **AVSession** object.
+   The code snippet below is used for canceling the listeners for control commands:
+
+   ```ts
+   async unregisterSessionListener() {
+     // It is assumed that an AVSession object has been created. For details about how to create an AVSession object, see the code snippet above.
+     let session: AVSessionManager.AVSession = ALLREADY_CREATE_A_SESSION;
+
+     // Cancel the listeners of the AVSession object.
+     session.off('play');
+     session.off('pause');
+     session.off('stop');
+     session.off('playNext');
+     session.off('playPrevious');
+   }
+   ```
+
+   The code snippet below is used for destroying the AVSession object:
+
+   ```ts
+   async destroySession() {
+     // It is assumed that an AVSession object has been created. For details about how to create an AVSession object, see the code snippet above.
+     let session: AVSessionManager.AVSession = ALLREADY_CREATE_A_SESSION;
+     // Destroy the AVSession object.
+     session.destroy(function (err) {
+       if (err) {
+         console.error(`Destroy BusinessError: code: ${err.code}, message: ${err.message}`);
+       } else {
+         console.info('Destroy : SUCCESS ');
+       }
+     });
+   }
+   ```
diff --git a/en/application-dev/media/using-distributed-avsession.md b/en/application-dev/media/using-distributed-avsession.md
new file mode 100644
index 0000000000000000000000000000000000000000..c1835d661fdd2b57b7dce0f2507dbea748eaea7e
--- /dev/null
+++ b/en/application-dev/media/using-distributed-avsession.md
@@ -0,0 +1,55 @@
+# Using Distributed AVSession
+
+## Basic Concepts
+
+- Remote AVSession: an AVSession automatically created on the remote device by the AVSession service for synchronization with an AVSession on the local device.
+
+- Remote AVSessionController: an AVSessionController automatically created on the remote device after projection.
+
+## Available APIs
+
+The table below describes the key APIs used for remote projection with the distributed AVSession. The APIs use either a callback or promise to return the result. The APIs listed below use a callback. They provide the same functions as their counterparts that use a promise.
+
+For details, see [AVSession Management](../reference/apis/js-apis-avsession.md).
+
+| API| Description|
+| -------- | -------- |
+| castAudio(session: SessionToken \| 'all', audioDevices: Array&lt;audio.AudioDeviceDescriptor&gt;, callback: AsyncCallback&lt;void&gt;): void | Casts a session to a list of devices.|
+
+## How to Develop
+
+To enable a system application that accesses the AVSession service as the controller to use the distributed AVSession for projection, proceed as follows:
+
+1. Import the modules. Before projection, you must obtain the AudioDeviceDescriptor from the audio module. Therefore, import both the audio module and the AVSessionManager module.
+
+   ```ts
+   import AVSessionManager from '@ohos.multimedia.avsession';
+   import audio from '@ohos.multimedia.audio';
+   ```
+
+2. Use **castAudio** in the **AVSessionManager** class to project all sessions of the local device to another device.
+
+   ```ts
+   // Cast the sessions to another device.
+   let audioManager = audio.getAudioManager();
+   let audioRoutingManager = audioManager.getRoutingManager();
+   let audioDevices;
+   await audioRoutingManager.getDevices(audio.DeviceFlag.OUTPUT_DEVICES_FLAG).then((data) => {
+     audioDevices = data;
+     console.info('Promise returned to indicate that the device list is obtained.');
+   }).catch((err) => {
+     console.error(`getDevices : ERROR : ${err.message}`);
+   });
+
+   AVSessionManager.castAudio('all', audioDevices).then(() => {
+     console.info('castAudio : SUCCESS');
+   }).catch((err) => {
+     console.error(`castAudio : ERROR : ${err.message}`);
+   });
+   ```
+
+   After the system application on the local device initiates projection to a remote device, the AVSession framework instructs the AVSession service of the remote device to create a remote AVSession. When the AVSession on the local device changes (for example, the media information or playback state changes), the AVSession framework automatically synchronizes the change to the remote device.
+
+   The AVSession processing mechanism on the remote device is consistent with that on the local device. That is, the controller (for example, Media Controller) on the remote device listens for the AVSession creation event and creates a remote **AVSessionController** object to manage the remote AVSession. In addition, the control commands are automatically synchronized by the AVSession framework to the local device.
+
+   The provider (for example, an audio and video application) on the local device listens for control command events, so as to respond to the commands from the remote device in time.
diff --git a/en/application-dev/media/using-opensl-es-for-playback.md b/en/application-dev/media/using-opensl-es-for-playback.md
new file mode 100644
index 0000000000000000000000000000000000000000..c5dedbba659154a1893a471e5e9a3d33d33be20a
--- /dev/null
+++ b/en/application-dev/media/using-opensl-es-for-playback.md
@@ -0,0 +1,131 @@
+# Using OpenSL ES for Audio Playback
+
+OpenSL ES, short for Open Sound Library for Embedded Systems, is an embedded, cross-platform audio processing library that is free of charge. It provides high-performance and low-latency APIs for you to develop applications running on embedded mobile multimedia devices. OpenHarmony has implemented certain native APIs based on the [OpenSL ES](https://www.khronos.org/opensles/) 1.0.1 API specifications developed by the [Khronos Group](https://www.khronos.org/). You can use these APIs through **OpenSLES.h** and **OpenSLES_OpenHarmony.h**.
+
+## OpenSL ES on OpenHarmony
+
+Currently, OpenHarmony implements parts of the [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) to provide basic audio playback functionalities.
+
+If an API that has not been implemented on OpenHarmony is called, **SL_RESULT_FEATURE_UNSUPPORTED** is returned.
+
+The following lists the OpenSL ES APIs that have been implemented on OpenHarmony. For details, see the [OpenSL ES](https://www.khronos.org/opensles/) specifications.
+
+- **Engine APIs implemented on OpenHarmony**
+  - SLresult (\*CreateAudioPlayer) (SLEngineItf self, SLObjectItf \* pPlayer, SLDataSource \*pAudioSrc, SLDataSink \*pAudioSnk, SLuint32 numInterfaces, const SLInterfaceID \* pInterfaceIds, const SLboolean \* pInterfaceRequired)
+  - SLresult (\*CreateAudioRecorder) (SLEngineItf self, SLObjectItf \* pRecorder, SLDataSource \*pAudioSrc, SLDataSink \*pAudioSnk, SLuint32 numInterfaces, const SLInterfaceID \* pInterfaceIds, const SLboolean \* pInterfaceRequired)
+  - SLresult (\*CreateOutputMix) (SLEngineItf self, SLObjectItf \* pMix, SLuint32 numInterfaces, const SLInterfaceID \* pInterfaceIds, const SLboolean \* pInterfaceRequired)
+
+- **Object APIs implemented on OpenHarmony**
+  - SLresult (\*Realize) (SLObjectItf self, SLboolean async)
+  - SLresult (\*GetState) (SLObjectItf self, SLuint32 \* pState)
+  - SLresult (\*GetInterface) (SLObjectItf self, const SLInterfaceID iid, void \* pInterface)
+  - void (\*Destroy) (SLObjectItf self)
+
+- **Playback APIs implemented on OpenHarmony**
+  - SLresult (\*SetPlayState) (SLPlayItf self, SLuint32 state)
+  - SLresult (\*GetPlayState) (SLPlayItf self, SLuint32 \*pState)
+
+- **Volume control APIs implemented on OpenHarmony**
+  - SLresult (\*SetVolumeLevel) (SLVolumeItf self, SLmillibel level)
+  - SLresult (\*GetVolumeLevel) (SLVolumeItf self, SLmillibel \*pLevel)
+  - SLresult (\*GetMaxVolumeLevel) (SLVolumeItf self, SLmillibel \*pMaxLevel)
+
+- **BufferQueue APIs implemented on OpenHarmony**
+
+  The APIs listed below can be used only after **OpenSLES_OpenHarmony.h** is introduced.
+  | API| Description|
+  | -------- | -------- |
+  | SLresult (\*Enqueue) (SLOHBufferQueueItf self, const void \*buffer, SLuint32 size) | Adds a buffer to the corresponding queue.<br>For an audio playback operation, this API adds the buffer with audio data to the **filledBufferQ_** queue. For an audio recording operation, this API adds the idle buffer after recording data storage to the **freeBufferQ_** queue.<br>The **self** parameter indicates the **BufferQueue** object that calls this API.<br>The **buffer** parameter indicates the pointer to the buffer with audio data or the pointer to the idle buffer after the recording data is stored.<br>The **size** parameter indicates the size of the buffer.|
+  | SLresult (\*Clear) (SLOHBufferQueueItf self) | Releases a **BufferQueue** object.<br>The **self** parameter indicates the **BufferQueue** object that calls this API.|
+  | SLresult (\*GetState) (SLOHBufferQueueItf self, SLOHBufferQueueState \*state) | Obtains the state of a **BufferQueue** object.<br>The **self** parameter indicates the **BufferQueue** object that calls this API.<br>The **state** parameter indicates the pointer to the state of the **BufferQueue** object.|
+  | SLresult (\*RegisterCallback) (SLOHBufferQueueItf self, SlOHBufferQueueCallback callback, void\* pContext) | Registers a callback.<br>The **self** parameter indicates the **BufferQueue** object that calls this API.<br>The **callback** parameter indicates the callback to be registered for the audio playback or recording operation.<br>The **pContext** parameter indicates the pointer to the audio file to be played for an audio playback operation or the pointer to the audio file to be recorded for an audio recording operation.|
+  | SLresult (\*GetBuffer) (SLOHBufferQueueItf self, SLuint8\*\* buffer, SLuint32\* size) | Obtains a buffer.<br>For an audio playback operation, this API obtains an idle buffer from the **freeBufferQ_** queue. For an audio recording operation, this API obtains the buffer that carries recording data from the **filledBufferQ_** queue.<br>The **self** parameter indicates the **BufferQueue** object that calls this API.<br>The **buffer** parameter indicates the double pointer to the idle buffer or the buffer carrying recording data.<br>The **size** parameter indicates the size of the buffer.|
+
+## Sample Code
+
+Refer to the sample code below to play an audio file.
+
+1. Add the header files.
+
+   ```c++
+   #include <OpenSLES.h>
+   #include <OpenSLES_OpenHarmony.h>
+   #include <OpenSLES_Platform.h>
+   ```
+
+2. Use the **slCreateEngine** API to obtain an **engine** instance.
+
+   ```c++
+   SLObjectItf engineObject = nullptr;
+   slCreateEngine(&engineObject, 0, nullptr, 0, nullptr, nullptr);
+   (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
+   ```
+
+3. Obtain the **engineEngine** instance of the **SL_IID_ENGINE** API.
+
+   ```c++
+   SLEngineItf engineEngine = nullptr;
+   (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
+   ```
+
+4. Configure the player and create an **AudioPlayer** instance.
+
+   ```c++
+   SLDataLocator_BufferQueue slBufferQueue = {
+       SL_DATALOCATOR_BUFFERQUEUE,
+       0
+   };
+
+   // Configure the parameters based on the audio file format.
+   SLDataFormat_PCM pcmFormat = {
+       SL_DATAFORMAT_PCM,
+       2,                           // Number of channels.
+       SL_SAMPLINGRATE_48,          // Sampling rate.
+       SL_PCMSAMPLEFORMAT_FIXED_16, // Audio sample format.
+       0,
+       0,
+       0
+   };
+   SLDataSource slSource = {&slBufferQueue, &pcmFormat};
+   SLObjectItf pcmPlayerObject = nullptr;
+   (*engineEngine)->CreateAudioPlayer(engineEngine, &pcmPlayerObject, &slSource, nullptr, 0, nullptr, nullptr);
+   (*pcmPlayerObject)->Realize(pcmPlayerObject, SL_BOOLEAN_FALSE);
+   ```
+
+5. Obtain the **bufferQueueItf** instance of the **SL_IID_OH_BUFFERQUEUE** API.
+
+   ```c++
+   SLOHBufferQueueItf bufferQueueItf;
+   (*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_OH_BUFFERQUEUE, &bufferQueueItf);
+   ```
+
+6. Open an audio file and register the **BufferQueueCallback** function.
+
+   ```c++
+   static void BufferQueueCallback(SLOHBufferQueueItf bufferQueueItf, void *pContext, SLuint32 size)
+   {
+       SLuint8 *buffer = nullptr;
+       SLuint32 pSize;
+       (*bufferQueueItf)->GetBuffer(bufferQueueItf, &buffer, &pSize);
+       // Write the audio data to be played to the buffer.
+       (*bufferQueueItf)->Enqueue(bufferQueueItf, buffer, size);
+   }
+   void *pContext; // This callback can be used to obtain the custom context information passed in.
+   (*bufferQueueItf)->RegisterCallback(bufferQueueItf, BufferQueueCallback, pContext);
+   ```
+
+7. Obtain the **playItf** instance of the **SL_IID_PLAY** API and start playing.
+
+   ```c++
+   SLPlayItf playItf = nullptr;
+   (*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_PLAY, &playItf);
+   (*playItf)->SetPlayState(playItf, SL_PLAYSTATE_PLAYING);
+   ```
+
+8. Stop playing.
+
+   ```c++
+   (*playItf)->SetPlayState(playItf, SL_PLAYSTATE_STOPPED);
+   (*pcmPlayerObject)->Destroy(pcmPlayerObject);
+   (*engineObject)->Destroy(engineObject);
+   ```
diff --git a/en/application-dev/media/using-opensl-es-for-recording.md b/en/application-dev/media/using-opensl-es-for-recording.md
new file mode 100644
index 0000000000000000000000000000000000000000..55a18fc561c0117d5aff5aaedb22c36f1b7706bf
--- /dev/null
+++ b/en/application-dev/media/using-opensl-es-for-recording.md
@@ -0,0 +1,148 @@
+# Using OpenSL ES for Audio Recording
+
+OpenSL ES, short for Open Sound Library for Embedded Systems, is an embedded, cross-platform audio processing library that is free of charge. It provides high-performance and low-latency APIs for you to develop applications running on embedded mobile multimedia devices. OpenHarmony has implemented certain native APIs based on the [OpenSL ES](https://www.khronos.org/opensles/) 1.0.1 API specifications developed by the [Khronos Group](https://www.khronos.org/). You can use these APIs through **OpenSLES.h** and **OpenSLES_OpenHarmony.h**.
+
+## OpenSL ES on OpenHarmony
+
+Currently, OpenHarmony implements parts of the [OpenSL ES APIs](https://gitee.com/openharmony/third_party_opensles/blob/master/api/1.0.1/OpenSLES.h) to provide basic audio recording functionalities.
+
+If an API that has not been implemented on OpenHarmony is called, **SL_RESULT_FEATURE_UNSUPPORTED** is returned.
+
+The following lists the OpenSL ES APIs that have been implemented on OpenHarmony. For details, see the [OpenSL ES](https://www.khronos.org/opensles/) specifications.
+
+- **Engine APIs implemented on OpenHarmony**
+  - SLresult (\*CreateAudioPlayer) (SLEngineItf self, SLObjectItf \* pPlayer, SLDataSource \*pAudioSrc, SLDataSink \*pAudioSnk, SLuint32 numInterfaces, const SLInterfaceID \* pInterfaceIds, const SLboolean \* pInterfaceRequired)
+  - SLresult (\*CreateAudioRecorder) (SLEngineItf self, SLObjectItf \* pRecorder, SLDataSource \*pAudioSrc, SLDataSink \*pAudioSnk, SLuint32 numInterfaces, const SLInterfaceID \* pInterfaceIds, const SLboolean \* pInterfaceRequired)
+  - SLresult (\*CreateOutputMix) (SLEngineItf self, SLObjectItf \* pMix, SLuint32 numInterfaces, const SLInterfaceID \* pInterfaceIds, const SLboolean \* pInterfaceRequired)
+
+- **Object APIs implemented on OpenHarmony**
+  - SLresult (\*Realize) (SLObjectItf self, SLboolean async)
+  - SLresult (\*GetState) (SLObjectItf self, SLuint32 \* pState)
+  - SLresult (\*GetInterface) (SLObjectItf self, const SLInterfaceID iid, void \* pInterface)
+  - void (\*Destroy) (SLObjectItf self)
+
+- **Recorder APIs implemented on OpenHarmony**
+  - SLresult (\*SetRecordState) (SLRecordItf self, SLuint32 state)
+  - SLresult (\*GetRecordState) (SLRecordItf self, SLuint32 \*pState)
+
+- **BufferQueue APIs implemented on OpenHarmony**
+
+  The APIs listed below can be used only after **OpenSLES_OpenHarmony.h** is introduced.
+  | API| Description|
+  | -------- | -------- |
+  | SLresult (\*Enqueue) (SLOHBufferQueueItf self, const void \*buffer, SLuint32 size) | Adds a buffer to the corresponding queue.<br>For an audio playback operation, this API adds the buffer with audio data to the **filledBufferQ_** queue. For an audio recording operation, this API adds the idle buffer after recording data storage to the **freeBufferQ_** queue.<br>The **self** parameter indicates the **BufferQueue** object that calls this API.<br>The **buffer** parameter indicates the pointer to the buffer with audio data or the pointer to the idle buffer after the recording data is stored.<br>The **size** parameter indicates the size of the buffer.|
+  | SLresult (\*Clear) (SLOHBufferQueueItf self) | Releases a **BufferQueue** object.<br>The **self** parameter indicates the **BufferQueue** object that calls this API.|
+  | SLresult (\*GetState) (SLOHBufferQueueItf self, SLOHBufferQueueState \*state) | Obtains the state of a **BufferQueue** object.<br>The **self** parameter indicates the **BufferQueue** object that calls this API.<br>The **state** parameter indicates the pointer to the state of the **BufferQueue** object.|
+  | SLresult (\*RegisterCallback) (SLOHBufferQueueItf self, SlOHBufferQueueCallback callback, void\* pContext) | Registers a callback.<br>The **self** parameter indicates the **BufferQueue** object that calls this API.<br>The **callback** parameter indicates the callback to be registered for the audio playback or recording operation.<br>The **pContext** parameter indicates the pointer to the audio file to be played for an audio playback operation or the pointer to the audio file to be recorded for an audio recording operation.|
+  | SLresult (\*GetBuffer) (SLOHBufferQueueItf self, SLuint8\*\* buffer, SLuint32\* size) | Obtains a buffer.<br>For an audio playback operation, this API obtains an idle buffer from the **freeBufferQ_** queue. For an audio recording operation, this API obtains the buffer that carries recording data from the **filledBufferQ_** queue.<br>The **self** parameter indicates the **BufferQueue** object that calls this API.<br>The **buffer** parameter indicates the double pointer to the idle buffer or the buffer carrying recording data.<br>The **size** parameter indicates the size of the buffer.|
+
+## Sample Code
+
+Refer to the sample code below to record an audio file.
+
+1. Add the header files.
+
+   ```c++
+   #include <OpenSLES.h>
+   #include <OpenSLES_OpenHarmony.h>
+   #include <OpenSLES_Platform.h>
+   ```
+
+2. Use the **slCreateEngine** API to create and instantiate an **engine** object.
+
+   ```c++
+   SLObjectItf engineObject = nullptr;
+   slCreateEngine(&engineObject, 0, nullptr, 0, nullptr, nullptr);
+   (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
+   ```
+
+3. Obtain the **engineItf** instance of the **SL_IID_ENGINE** API.
+
+   ```c++
+   SLEngineItf engineItf = nullptr;
+   (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineItf);
+   ```
+
+4. Configure the recorder information (including the input source **audioSource** and output source **audioSink**), and create a **pcmCapturerObject** instance.
+
+   ```c++
+   SLDataLocator_IODevice io_device = {
+       SL_DATALOCATOR_IODEVICE,
+       SL_IODEVICE_AUDIOINPUT,
+       SL_DEFAULTDEVICEID_AUDIOINPUT,
+       NULL
+   };
+   SLDataSource audioSource = {
+       &io_device,
+       NULL
+   };
+   SLDataLocator_BufferQueue buffer_queue = {
+       SL_DATALOCATOR_BUFFERQUEUE,
+       3
+   };
+   // Configure the parameters based on the audio file format.
+   SLDataFormat_PCM format_pcm = {
+       SL_DATAFORMAT_PCM,           // Input audio format.
+       1,                           // Mono channel.
+       SL_SAMPLINGRATE_44_1,        // Sampling rate, 44100 Hz.
+       SL_PCMSAMPLEFORMAT_FIXED_16, // Audio sampling format, a signed 16-bit integer in little-endian format.
+       0,
+       0,
+       0
+   };
+   SLDataSink audioSink = {
+       &buffer_queue,
+       &format_pcm
+   };
+
+   SLObjectItf pcmCapturerObject = nullptr;
+   (*engineItf)->CreateAudioRecorder(engineItf, &pcmCapturerObject,
+       &audioSource, &audioSink, 0, nullptr, nullptr);
+   (*pcmCapturerObject)->Realize(pcmCapturerObject, SL_BOOLEAN_FALSE);
+   ```
+
+5. Obtain the **recordItf** instance of the **SL_IID_RECORD** API.
+
+   ```c++
+   SLRecordItf recordItf;
+   (*pcmCapturerObject)->GetInterface(pcmCapturerObject, SL_IID_RECORD, &recordItf);
+   ```
+
+6. Obtain the **bufferQueueItf** instance of the **SL_IID_OH_BUFFERQUEUE** API.
+
+   ```c++
+   SLOHBufferQueueItf bufferQueueItf;
+   (*pcmCapturerObject)->GetInterface(pcmCapturerObject, SL_IID_OH_BUFFERQUEUE, &bufferQueueItf);
+   ```
+
+7. Register the **BufferQueueCallback** function.
+
+   ```c++
+   static void BufferQueueCallback(SLOHBufferQueueItf bufferQueueItf, void *pContext, SLuint32 size)
+   {
+       // Obtain the user information passed in during the registration from pContext.
+       SLuint8 *buffer = nullptr;
+       SLuint32 pSize = 0;
+       (*bufferQueueItf)->GetBuffer(bufferQueueItf, &buffer, &pSize);
+       if (buffer != nullptr) {
+           // The recording data can be read from the buffer for subsequent processing.
+           (*bufferQueueItf)->Enqueue(bufferQueueItf, buffer, size);
+       }
+   }
+   void *pContext; // This callback can be used to obtain the custom context information passed in.
+   (*bufferQueueItf)->RegisterCallback(bufferQueueItf, BufferQueueCallback, pContext);
+   ```
+
+8. Start audio recording.
+
+   ```c++
+   (*recordItf)->SetRecordState(recordItf, SL_RECORDSTATE_RECORDING);
+   ```
+
+9. Stop audio recording.
+
+   ```c++
+   (*recordItf)->SetRecordState(recordItf, SL_RECORDSTATE_STOPPED);
+   (*pcmCapturerObject)->Destroy(pcmCapturerObject);
+   ```
diff --git a/en/application-dev/media/using-toneplayer-for-playback.md b/en/application-dev/media/using-toneplayer-for-playback.md
new file mode 100644
index 0000000000000000000000000000000000000000..11a528786b5bae712d8c4f07b9cad4ee29af2f48
--- /dev/null
+++ b/en/application-dev/media/using-toneplayer-for-playback.md
@@ -0,0 +1,140 @@
+# Using TonePlayer for Audio Playback (for System Applications Only)
+
+TonePlayer<sup>9+</sup> provides APIs for playing and managing Dual Tone Multi Frequency (DTMF) tones, such as dial tones, ringback tones, supervisory tones, and proprietary tones. The main task of the TonePlayer is to generate sine waves of different frequencies based on the [ToneType](../reference/apis/js-apis-audio.md#tonetype9) by using the built-in algorithm, and combine the sine waves to create a sound. The sound can then be played through the [AudioRenderer](../reference/apis/js-apis-audio.md#audiorenderer8), which also manages the playback task. The full process includes loading the DTMF tone configuration, starting DTMF tone playing, stopping the playback, and releasing the resources associated with the **TonePlayer** object. For details about the APIs, see the [TonePlayer API Reference](../reference/apis/js-apis-audio.md#toneplayer9).
+
+
+## Supported Tone Types
+
+The table below lists the supported [ToneType](../reference/apis/js-apis-audio.md#tonetype9)s. You can call **load()** with **audio.ToneType.*type*** as a parameter to load the tone resource of the specified type.
+
+| Tone Type| Value| Description|
+| -------- | -------- | -------- |
+| TONE_TYPE_DIAL_0 | 0 | DTMF tone of key 0.|
+| TONE_TYPE_DIAL_1 | 1 | DTMF tone of key 1.|
+| TONE_TYPE_DIAL_2 | 2 | DTMF tone of key 2.|
+| TONE_TYPE_DIAL_3 | 3 | DTMF tone of key 3.|
+| TONE_TYPE_DIAL_4 | 4 | DTMF tone of key 4.|
+| TONE_TYPE_DIAL_5 | 5 | DTMF tone of key 5.|
+| TONE_TYPE_DIAL_6 | 6 | DTMF tone of key 6.|
+| TONE_TYPE_DIAL_7 | 7 | DTMF tone of key 7.|
+| TONE_TYPE_DIAL_8 | 8 | DTMF tone of key 8.|
+| TONE_TYPE_DIAL_9 | 9 | DTMF tone of key 9.|
+| TONE_TYPE_DIAL_S | 10 | DTMF tone of the star key (*).|
+| TONE_TYPE_DIAL_P | 11 | DTMF tone of the pound key (#).|
+| TONE_TYPE_DIAL_A | 12 | DTMF tone of key A.|
+| TONE_TYPE_DIAL_B | 13 | DTMF tone of key B.|
+| TONE_TYPE_DIAL_C | 14 | DTMF tone of key C.|
+| TONE_TYPE_DIAL_D | 15 | DTMF tone of key D.|
+| TONE_TYPE_COMMON_SUPERVISORY_DIAL | 100 | Supervisory tone - dial tone.|
+| TONE_TYPE_COMMON_SUPERVISORY_BUSY | 101 | Supervisory tone - busy.|
+| TONE_TYPE_COMMON_SUPERVISORY_CONGESTION | 102 | Supervisory tone - congestion.|
+| TONE_TYPE_COMMON_SUPERVISORY_RADIO_ACK | 103 | Supervisory tone - radio path acknowledgment.|
+| TONE_TYPE_COMMON_SUPERVISORY_RADIO_NOT_AVAILABLE | 104 | Supervisory tone - radio path not available.|
+| TONE_TYPE_COMMON_SUPERVISORY_CALL_WAITING | 106 | Supervisory tone - call waiting tone.|
+| TONE_TYPE_COMMON_SUPERVISORY_RINGTONE | 107 | Supervisory tone - ringing tone.|
+| TONE_TYPE_COMMON_PROPRIETARY_BEEP | 200 | Proprietary tone - beep tone.|
+| TONE_TYPE_COMMON_PROPRIETARY_ACK | 201 | Proprietary tone - ACK.|
+| TONE_TYPE_COMMON_PROPRIETARY_PROMPT | 203 | Proprietary tone - PROMPT.|
+| TONE_TYPE_COMMON_PROPRIETARY_DOUBLE_BEEP | 204 | Proprietary tone - double beep tone.|
+
+
+## How to Develop
+
+To implement audio playback with the TonePlayer, perform the following steps:
+
+1. Create a **TonePlayer** instance.
+
+   ```ts
+   import audio from '@ohos.multimedia.audio';
+   let audioRendererInfo = {
+     content : audio.ContentType.CONTENT_TYPE_SONIFICATION,
+     usage : audio.StreamUsage.STREAM_USAGE_MEDIA,
+     rendererFlags : 0
+   };
+   let tonePlayerPromise = await audio.createTonePlayer(audioRendererInfo);
+   ```
+
+2. Load the DTMF tone configuration of the specified type.
+
+   ```ts
+   await tonePlayerPromise.load(audio.ToneType.TONE_TYPE_DIAL_0);
+   ```
+
+3. Start DTMF tone playing.
+
+   ```ts
+   await tonePlayerPromise.start();
+   ```
+
+4. Stop the tone that is being played.
+
+   ```ts
+   await tonePlayerPromise.stop();
+   ```
+
+5. Release the resources associated with the **TonePlayer** instance.
+
+   ```ts
+   await tonePlayerPromise.release();
+   ```
+
+If the APIs are not called in the preceding sequence, the error code **6800301 NAPI_ERR_SYSTEM** is returned.
+
+
+## Sample Code
+
+Refer to the following code to play the DTMF tone when the dial key on the keyboard is pressed.
+
+To prevent the UI thread from being blocked, most **TonePlayer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions. For more information, see [TonePlayer](../reference/apis/js-apis-audio.md#toneplayer9).
+
+
+```ts
+import audio from '@ohos.multimedia.audio';
+
+export class TonePlayerDemo {
+  private timer : number;
+  private timerPro : number;
+  // Promise mode.
+  async testTonePlayerPromise(type) {
+    console.info('testTonePlayerPromise start');
+    if (this.timerPro) clearTimeout(this.timerPro);
+    let tonePlayerPromise;
+    let audioRendererInfo = {
+      content : audio.ContentType.CONTENT_TYPE_SONIFICATION,
+      usage : audio.StreamUsage.STREAM_USAGE_MEDIA,
+      rendererFlags : 0
+    };
+    this.timerPro = setTimeout(async () => {
+      try {
+        console.info('testTonePlayerPromise: createTonePlayer');
+        // Create a DTMF player.
+        tonePlayerPromise = await audio.createTonePlayer(audioRendererInfo);
+        console.info('testTonePlayerPromise: createTonePlayer-success');
+        console.info(`testTonePlayerPromise: load type: ${type}`);
+        // Load the tone configuration of the specified type.
+        await tonePlayerPromise.load(type);
+        console.info('testTonePlayerPromise: load-success');
+        console.info(`testTonePlayerPromise: start type: ${type}`);
+        // Start DTMF tone playing.
+        await tonePlayerPromise.start();
+        console.info('testTonePlayerPromise: start-success');
+        console.info(`testTonePlayerPromise: stop type: ${type}`);
+        setTimeout(async () => {
+          // Stop the tone that is being played.
+          await tonePlayerPromise.stop();
+          console.info('testTonePlayerPromise: stop-success');
+          console.info(`testTonePlayerPromise: release type: ${type}`);
+          // Release the resources associated with the TonePlayer instance.
+ await tonePlayerPromise.release(); + console.info('testTonePlayerPromise: release-success'); + }, 30) + } catch(err) { + console.error(`testTonePlayerPromise err : ${err}`); + } + }, 200) + }; + async testTonePlayer() { + this.testTonePlayerPromise(audio.ToneType.TONE_TYPE_DIAL_0); + } +} +``` diff --git a/en/application-dev/media/video-playback.md b/en/application-dev/media/video-playback.md index d4c895b452aa31b28690bd96bd9ef0fac64c4eb4..406fa76817d15e101dfbda7252af3e685517a2b5 100644 --- a/en/application-dev/media/video-playback.md +++ b/en/application-dev/media/video-playback.md @@ -1,419 +1,176 @@ -# Video Playback Development - -## Introduction - -You can use video playback APIs to convert audio data into audible analog signals and play the signals using output devices. You can also manage playback tasks. For example, you can start, suspend, stop playback, release resources, set the volume, seek to a playback position, set the playback speed, and obtain track information. This document describes development for the following video playback scenarios: full-process, normal playback, video switching, and loop playback. - -## Working Principles - -The following figures show the video playback state transition and the interaction with external modules for video playback. - -**Figure 1** Video playback state transition - -![en-us_image_video_state_machine](figures/en-us_image_video_state_machine.png) - -**Figure 2** Interaction with external modules for video playback - -![en-us_image_video_player](figures/en-us_image_video_player.png) - -**NOTE**: When a third-party application calls a JS interface provided by the JS interface layer, the framework layer invokes the audio component through the media service of the native framework to output the audio data decoded by the software to the audio HDI. The graphics subsystem outputs the image data decoded by the codec HDI at the hardware interface layer to the display HDI. In this way, video playback is implemented. - -*Note: Video playback requires hardware capabilities such as display, audio, and codec.* - -1. A third-party application obtains a surface ID from the XComponent. -2. The third-party application transfers the surface ID to the VideoPlayer JS. -3. The media service flushes the frame data to the surface buffer. - -## Compatibility - -Use the mainstream playback formats and resolutions, rather than custom ones to avoid playback failures, frame freezing, and artifacts. The system is not affected by incompatibility issues. If such an issue occurs, you can exit stream playback mode. - -The table below lists the mainstream playback formats and resolutions. - -| Video Container Format| Description | Resolution | -| :----------: | :-----------------------------------------------: | :--------------------------------: | -| mp4 | Video format: H.264/MPEG-2/MPEG-4/H.263; audio format: AAC/MP3| Mainstream resolutions, such as 1080p, 720p, 480p, and 270p| -| mkv | Video format: H.264/MPEG-2/MPEG-4/H.263; audio format: AAC/MP3| Mainstream resolutions, such as 1080p, 720p, 480p, and 270p| -| ts | Video format: H.264/MPEG-2/MPEG-4; audio format: AAC/MP3 | Mainstream resolutions, such as 1080p, 720p, 480p, and 270p| -| webm | Video format: VP8; audio format: VORBIS | Mainstream resolutions, such as 1080p, 720p, 480p, and 270p| - -## How to Develop - -For details about the APIs, see [VideoPlayer in the Media API](../reference/apis/js-apis-media.md#videoplayer8). 
-
-### Full-Process Scenario
-
-The full video playback process includes creating an instance, setting the URL, setting the surface ID, preparing for video playback, playing video, pausing playback, obtaining track information, seeking to a playback position, setting the volume, setting the playback speed, stopping playback, resetting the playback configuration, and releasing resources.
-
-For details about the **url** types supported by **VideoPlayer**, see the [url attribute](../reference/apis/js-apis-media.md#videoplayer_attributes).
-
-For details about how to create an XComponent, see [XComponent](../reference/arkui-ts/ts-basic-components-xcomponent.md).
-
-```js
-import media from '@ohos.multimedia.media'
-import fs from '@ohos.file.fs'
-export class VideoPlayerDemo {
-  // Report an error in the case of a function invocation failure.
-  failureCallback(error) {
-    console.info(`error happened,error Name is ${error.name}`);
-    console.info(`error happened,error Code is ${error.code}`);
-    console.info(`error happened,error Message is ${error.message}`);
-  }
-
-  // Report an error in the case of a function invocation exception.
-  catchCallback(error) {
-    console.info(`catch error happened,error Name is ${error.name}`);
-    console.info(`catch error happened,error Code is ${error.code}`);
-    console.info(`catch error happened,error Message is ${error.message}`);
-  }
-
-  // Used to print the video track information.
-  printfDescription(obj) {
-    for (let item in obj) {
-      let property = obj[item];
-      console.info('key is ' + item);
-      console.info('value is ' + property);
-    }
-  }
-
-  async videoPlayerDemo() {
-    let videoPlayer = undefined;
-    let surfaceID = 'test' // The surfaceID parameter is used for screen display. Its value is obtained through the XComponent API. For details about the document link, see the method of creating the XComponent.
-    let fdPath = 'fd://'
-    // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile" command.
-    let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile/H264_AAC.mp4';
-    let file = await fs.open(path);
-    fdPath = fdPath + '' + file.fd;
-    // Call createVideoPlayer to create a VideoPlayer instance.
-    await media.createVideoPlayer().then((video) => {
-      if (typeof (video) != 'undefined') {
-        console.info('createVideoPlayer success!');
-        videoPlayer = video;
-      } else {
-        console.info('createVideoPlayer fail!');
+# Video Playback
+
+OpenHarmony provides two solutions for video playback development:
+
+- [AVPlayer](using-avplayer-for-playback.md) class: provides ArkTS and JS APIs to implement audio and video playback. It also supports parsing streaming media and local assets, decapsulating media assets, decoding video, and rendering video. It is applicable to end-to-end playback of media assets and can be used to play video files in MP4 and MKV formats.
+
+- **Video** component: encapsulates basic video playback capabilities. It can be used to play video files after the data source and basic information are set. However, its scalability is poor. This component is provided by ArkUI. For details about how to use this component for video playback development, see [Video Component](../ui/arkts-common-components-video-player.md). A minimal usage sketch follows this list.
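+
+As a quick orientation, the sketch below shows the **Video** component approach in ArkTS. It is a minimal illustration rather than an excerpt from the linked guide; the raw-file asset name **video.mp4** is an assumption.
+
+```ts
+// A minimal sketch of <Video>-based playback, assuming a file named video.mp4 exists in resources/rawfile.
+@Entry
+@Component
+struct VideoGuide {
+  controller: VideoController = new VideoController()
+
+  build() {
+    Column() {
+      Video({ src: $rawfile('video.mp4'), controller: this.controller })
+        .autoPlay(true) // Start playback as soon as the data source is ready.
+        .controls(true) // Show the default playback control bar.
+        .width('100%')
+    }
+  }
+}
+```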
+
+In this topic, you will learn how to use the AVPlayer to develop a video playback service that plays a complete video file. If you want the application to continue playing the video in the background or when the screen is off, you must use [AVSession](avsession-overview.md) and a [continuous task](../task-management/continuous-task-dev-guide.md) to prevent the playback from being forcibly interrupted by the system.
+
+## Development Guidelines
+
+The full playback process includes creating an **AVPlayer** instance, setting the media asset to play and the window to display the video, setting playback parameters (volume, speed, and scale type), controlling playback (play, pause, seek, and stop), resetting the playback configuration, and releasing the instance. During application development, you can use the **state** attribute of the AVPlayer to obtain the AVPlayer state or call **on('stateChange')** to listen for state changes. If the application performs an operation when the AVPlayer is not in the given state, the system may throw an exception or generate other undefined behavior.
+
+**Figure 1** Playback state transition
+
+![Playback state change](figures/video-playback-status-change.png)
+
+For details about the state, see [AVPlayerState](../reference/apis/js-apis-media.md#avplayerstate9). When the AVPlayer is in the **prepared**, **playing**, **paused**, or **completed** state, the playback engine is working and a large amount of RAM is occupied. If your application does not need to use the AVPlayer, call **reset()** or **release()** to release the instance.
+
+### How to Develop
+
+Read [AVPlayer](../reference/apis/js-apis-media.md#avplayer9) for the API reference.
+
+1. Call **createAVPlayer()** to create an **AVPlayer** instance. The AVPlayer enters the **idle** state.
+
+2. Set the events to listen for, which will be used in the full-process scenario. The table below lists the supported events.
+   | Event Type| Description|
+   | -------- | -------- |
+   | stateChange | Mandatory; used to listen for changes of the **state** attribute of the AVPlayer.|
+   | error | Mandatory; used to listen for AVPlayer errors.|
+   | durationUpdate | Used to listen for progress bar updates to refresh the media asset duration.|
+   | timeUpdate | Used to listen for the current position of the progress bar to refresh the current time.|
+   | seekDone | Used to listen for the completion status of the **seek()** request.<br>This event is reported when the AVPlayer seeks to the playback position specified in **seek()**.|
+   | speedDone | Used to listen for the completion status of the **setSpeed()** request.<br>This event is reported when the AVPlayer plays video at the speed specified in **setSpeed()**.|
+   | volumeChange | Used to listen for the completion status of the **setVolume()** request.<br>This event is reported when the AVPlayer plays video at the volume specified in **setVolume()**.|
+   | bitrateDone | Used to listen for the completion status of the **setBitrate()** request, which is used for HTTP Live Streaming (HLS) streams.<br>This event is reported when the AVPlayer plays video at the bit rate specified in **setBitrate()**.|
+   | availableBitrates | Used to listen for available bit rates of HLS resources. The available bit rates are provided for **setBitrate()**.|
+   | bufferingUpdate | Used to listen for network playback buffer information.|
+   | startRenderFrame | Used to listen for the rendering time of the first frame during video playback.|
+   | videoSizeChange | Used to listen for the width and height of video playback and adjust the window size and ratio.|
+   | audioInterrupt | Used to listen for audio interruption. This event is used together with the **audioInterruptMode** attribute.<br>This event is reported when the current audio playback is interrupted by another audio stream (for example, when a call comes in), so the application can process the event in time.|
+
+3. Set the media asset URL. The AVPlayer enters the **initialized** state.
+   > **NOTE**
+   >
+   > The URL in the code snippet below is for reference only. You need to check the media asset validity and set the URL based on service requirements.
+   >
+   > - If local files are used for playback, ensure that the files are available and the application sandbox path is used for access. For details about how to obtain the application sandbox path, see [Obtaining the Application Development Path](../application-models/application-context-stage.md#obtaining-the-application-development-path). For details about the application sandbox and how to push files to the application sandbox, see [File Management](../file-management/app-sandbox-directory.md).
+   >
+   > - If a network playback path is used, you must request the ohos.permission.INTERNET [permission](../security/accesstoken-guidelines.md).
+   >
+   > - You can also use **ResourceManager.getRawFd** to obtain the file descriptor of a file packed in the HAP file. For details, see [ResourceManager API Reference](../reference/apis/js-apis-resource-manager.md#getrawfd9).
+   >
+   > - The [playback formats and protocols](avplayer-avrecorder-overview.md#supported-formats-and-protocols) in use must be those supported by the system.
+
+4. Obtain and set the surface ID of the window to display the video.
+   The application obtains the surface ID from the XComponent. For details about the process, see [XComponent](../reference/arkui-ts/ts-basic-components-xcomponent.md).
+
+5. Call **prepare()** to switch the AVPlayer to the **prepared** state. In this state, you can obtain the duration of the media asset to play and set the scale type and volume.
+
+6. Call **play()**, **pause()**, **seek()**, and **stop()** to perform video playback control as required.
+
+7. (Optional) Call **reset()** to reset the AVPlayer. The AVPlayer enters the **idle** state again and you can change the media asset URL.
+
+8. Call **release()** to switch the AVPlayer to the **released** state. Now your application exits the playback.
+
+
+### Sample Code
+
+
+```ts
+import media from '@ohos.multimedia.media';
+import fs from '@ohos.file.fs';
+import common from '@ohos.app.ability.common';
+
+export class AVPlayerDemo {
+  private avPlayer;
+  private count: number = 0;
+  private surfaceID: string; // The surfaceID parameter specifies the window used to display the video. Its value is obtained through the XComponent.
+
+  // Set AVPlayer callback functions.
+  setAVPlayerCallback() {
+    // Callback function for the seek operation.
+    this.avPlayer.on('seekDone', (seekDoneTime) => {
+      console.info(`AVPlayer seek succeeded, seek time is ${seekDoneTime}`);
+    })
+    // Callback function for errors. If an error occurs during the operation on the AVPlayer, reset() is called to reset the AVPlayer.
+    this.avPlayer.on('error', (err) => {
+      console.error(`Invoke avPlayer failed, code is ${err.code}, message is ${err.message}`);
+      this.avPlayer.reset(); // Call reset() to reset the AVPlayer, which enters the idle state.
+    })
+    // Callback function for state changes.
+    this.avPlayer.on('stateChange', async (state, reason) => {
+      switch (state) {
+        case 'idle': // This state is reported upon a successful callback of reset().
+          console.info('AVPlayer state idle called.');
+          this.avPlayer.release(); // Call release() to release the instance.
+          break;
+        case 'initialized': // This state is reported when the AVPlayer sets the playback source.
+          console.info('AVPlayer state initialized called.');
+          this.avPlayer.surfaceId = this.surfaceID; // Set the window to display the video. This setting is not required when a pure audio asset is to be played.
+          this.avPlayer.prepare().then(() => {
+            console.info('AVPlayer prepare succeeded.');
+          }, (err) => {
+            console.error(`Invoke prepare failed, code is ${err.code}, message is ${err.message}`);
+          });
+          break;
+        case 'prepared': // This state is reported upon a successful callback of prepare().
+          console.info('AVPlayer state prepared called.');
+          this.avPlayer.play(); // Call play() to start playback.
+          break;
+        case 'playing': // This state is reported upon a successful callback of play().
+          console.info('AVPlayer state playing called.');
+          if (this.count !== 0) {
+            console.info('AVPlayer start to seek.');
+            this.avPlayer.seek(this.avPlayer.duration); // Call seek() to seek to the end of the video clip.
+          } else {
+            this.avPlayer.pause(); // Call pause() to pause the playback.
+          }
+          this.count++;
+          break;
+        case 'paused': // This state is reported upon a successful callback of pause().
+          console.info('AVPlayer state paused called.');
+          this.avPlayer.play(); // Call play() again to start playback.
+          break;
+        case 'completed': // This state is reported upon the completion of the playback.
+          console.info('AVPlayer state completed called.');
+          this.avPlayer.stop(); // Call stop() to stop the playback.
+          break;
+        case 'stopped': // This state is reported upon a successful callback of stop().
+          console.info('AVPlayer state stopped called.');
+          this.avPlayer.reset(); // Call reset() to reset the AVPlayer state.
+          break;
+        case 'released':
+          console.info('AVPlayer state released called.');
+          break;
+        default:
+          console.info('AVPlayer state unknown called.');
+          break;
+      }
-    }, this.failureCallback).catch(this.catchCallback);
-    // Set the playback source for the player.
-    videoPlayer.url = fdPath;
-
-    // Set the surface ID to display the video image.
-    await videoPlayer.setDisplaySurface(surfaceID).then(() => {
-      console.info('setDisplaySurface success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Call the prepare API to prepare for playback.
-    await videoPlayer.prepare().then(() => {
-      console.info('prepare success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Call the play API to start playback.
-    await videoPlayer.play().then(() => {
-      console.info('play success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Pause playback.
-    await videoPlayer.pause().then(() => {
-      console.info('pause success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Use a promise to obtain the video track information communication_dsoftbus.
-    let arrayDescription;
-    await videoPlayer.getTrackDescription().then((arrlist) => {
-      if (typeof (arrlist) != 'undefined') {
-        arrayDescription = arrlist;
-      } else {
-        console.log('video getTrackDescription fail');
-      }
-    }, this.failureCallback).catch(this.catchCallback);
-
-    for (let i = 0; i < arrayDescription.length; i++) {
-      this.printfDescription(arrayDescription[i]);
-    }
-
-    // Seek to the 50s position. For details about the input parameters, see the API document.
-    let seekTime = 50000;
-    await videoPlayer.seek(seekTime, media.SeekMode.SEEK_NEXT_SYNC).then((seekDoneTime) => {
-      console.info('seek success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Set the volume.
For details about the input parameters, see the API document. - let volume = 0.5; - await videoPlayer.setVolume(volume).then(() => { - console.info('setVolume success'); - }, this.failureCallback).catch(this.catchCallback); - - // Set the playback speed. For details about the input parameters, see the API document. - let speed = media.PlaybackSpeed.SPEED_FORWARD_2_00_X; - await videoPlayer.setSpeed(speed).then(() => { - console.info('setSpeed success'); - }, this.failureCallback).catch(this.catchCallback); - - // Stop playback. - await videoPlayer.stop().then(() => { - console.info('stop success'); - }, this.failureCallback).catch(this.catchCallback); - - // Reset the playback configuration. - await videoPlayer.reset().then(() => { - console.info('reset success'); - }, this.failureCallback).catch(this.catchCallback); - - // Release playback resources. - await videoPlayer.release().then(() => { - console.info('release success'); - }, this.failureCallback).catch(this.catchCallback); - - // Set the related instances to undefined. - videoPlayer = undefined; - surfaceID = undefined; - } -} -``` - -### Normal Playback Scenario - -```js -import media from '@ohos.multimedia.media' -import fs from '@ohos.file.fs' -export class VideoPlayerDemo { - // Report an error in the case of a function invocation failure. - failureCallback(error) { - console.info(`error happened,error Name is ${error.name}`); - console.info(`error happened,error Code is ${error.code}`); - console.info(`error happened,error Message is ${error.message}`); - } - - // Report an error in the case of a function invocation exception. - catchCallback(error) { - console.info(`catch error happened,error Name is ${error.name}`); - console.info(`catch error happened,error Code is ${error.code}`); - console.info(`catch error happened,error Message is ${error.message}`); - } - - // Used to print the video track information. - printfDescription(obj) { - for (let item in obj) { - let property = obj[item]; - console.info('key is ' + item); - console.info('value is ' + property); - } - } - - async videoPlayerDemo() { - let videoPlayer = undefined; - let surfaceID = 'test' // The surfaceID parameter is used for screen display. Its value is obtained through the XComponent API. For details about the document link, see the method of creating the XComponent. - let fdPath = 'fd://' - // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile" command. - let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile/H264_AAC.mp4'; - let file = await fs.open(path); - fdPath = fdPath + '' + file.fd; - // Call createVideoPlayer to create a VideoPlayer instance. - await media.createVideoPlayer().then((video) => { - if (typeof (video) != 'undefined') { - console.info('createVideoPlayer success!'); - videoPlayer = video; - } else { - console.info('createVideoPlayer fail!'); - } - }, this.failureCallback).catch(this.catchCallback); - // Set the playback source for the player. - videoPlayer.url = fdPath; - - // Set the surface ID to display the video image. - await videoPlayer.setDisplaySurface(surfaceID).then(() => { - console.info('setDisplaySurface success'); - }, this.failureCallback).catch(this.catchCallback); - - // Call the prepare API to prepare for playback. 
- await videoPlayer.prepare().then(() => { - console.info('prepare success'); - }, this.failureCallback).catch(this.catchCallback); - - // Call the play API to start playback. - await videoPlayer.play().then(() => { - console.info('play success'); - }, this.failureCallback).catch(this.catchCallback); - - // Stop playback. - await videoPlayer.stop().then(() => { - console.info('stop success'); - }, this.failureCallback).catch(this.catchCallback); - - // Release playback resources. - await videoPlayer.release().then(() => { - console.info('release success'); - }, this.failureCallback).catch(this.catchCallback); - - // Set the related instances to undefined. - videoPlayer = undefined; - surfaceID = undefined; - } -} -``` - -### Switching to the Next Video Clip - -```js -import media from '@ohos.multimedia.media' -import fs from '@ohos.file.fs' -export class VideoPlayerDemo { - // Report an error in the case of a function invocation failure. - failureCallback(error) { - console.info(`error happened,error Name is ${error.name}`); - console.info(`error happened,error Code is ${error.code}`); - console.info(`error happened,error Message is ${error.message}`); - } - - // Report an error in the case of a function invocation exception. - catchCallback(error) { - console.info(`catch error happened,error Name is ${error.name}`); - console.info(`catch error happened,error Code is ${error.code}`); - console.info(`catch error happened,error Message is ${error.message}`); - } - - // Used to print the video track information. - printfDescription(obj) { - for (let item in obj) { - let property = obj[item]; - console.info('key is ' + item); - console.info('value is ' + property); - } - } - - async videoPlayerDemo() { - let videoPlayer = undefined; - let surfaceID = 'test' // The surfaceID parameter is used for screen display. Its value is obtained through the XComponent API. For details about the document link, see the method of creating the XComponent. - let fdPath = 'fd://' - // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile" command. - let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile/H264_AAC.mp4'; - let nextPath = '/data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile/MP4_AAC.mp4'; - let file = await fs.open(path); - fdPath = fdPath + '' + file.fd; - // Call createVideoPlayer to create a VideoPlayer instance. - await media.createVideoPlayer().then((video) => { - if (typeof (video) != 'undefined') { - console.info('createVideoPlayer success!'); - videoPlayer = video; - } else { - console.info('createVideoPlayer fail!'); - } - }, this.failureCallback).catch(this.catchCallback); - // Set the playback source for the player. - videoPlayer.url = fdPath; - - // Set the surface ID to display the video image. - await videoPlayer.setDisplaySurface(surfaceID).then(() => { - console.info('setDisplaySurface success'); - }, this.failureCallback).catch(this.catchCallback); - - // Call the prepare API to prepare for playback. - await videoPlayer.prepare().then(() => { - console.info('prepare success'); - }, this.failureCallback).catch(this.catchCallback); - - // Call the play API to start playback. 
- await videoPlayer.play().then(() => { - console.info('play success'); - }, this.failureCallback).catch(this.catchCallback); - - // Reset the playback configuration. - await videoPlayer.reset().then(() => { - console.info('reset success'); - }, this.failureCallback).catch(this.catchCallback); - - // Obtain the next video FD address. - fdPath = 'fd://' - let nextFile = await fs.open(nextPath); - fdPath = fdPath + '' + nextFile.fd; - // Set the second video playback source. - videoPlayer.url = fdPath; - - // Call the prepare API to prepare for playback. - await videoPlayer.prepare().then(() => { - console.info('prepare success'); - }, this.failureCallback).catch(this.catchCallback); - - // Call the play API to start playback. - await videoPlayer.play().then(() => { - console.info('play success'); - }, this.failureCallback).catch(this.catchCallback); - - // Release playback resources. - await videoPlayer.release().then(() => { - console.info('release success'); - }, this.failureCallback).catch(this.catchCallback); - - // Set the related instances to undefined. - videoPlayer = undefined; - surfaceID = undefined; - } -} -``` - -### Looping a Video Clip - -```js -import media from '@ohos.multimedia.media' -import fs from '@ohos.file.fs' -export class VideoPlayerDemo { - // Report an error in the case of a function invocation failure. - failureCallback(error) { - console.info(`error happened,error Name is ${error.name}`); - console.info(`error happened,error Code is ${error.code}`); - console.info(`error happened,error Message is ${error.message}`); - } - - // Report an error in the case of a function invocation exception. - catchCallback(error) { - console.info(`catch error happened,error Name is ${error.name}`); - console.info(`catch error happened,error Code is ${error.code}`); - console.info(`catch error happened,error Message is ${error.message}`); - } - - // Used to print the video track information. - printfDescription(obj) { - for (let item in obj) { - let property = obj[item]; - console.info('key is ' + item); - console.info('value is ' + property); - } - } - - async videoPlayerDemo() { - let videoPlayer = undefined; - let surfaceID = 'test' // The surfaceID parameter is used for screen display. Its value is obtained through the XComponent API. For details about the document link, see the method of creating the XComponent. - let fdPath = 'fd://' - // The stream in the path can be pushed to the device by running the "hdc file send D:\xxx\H264_AAC.mp4 /data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile" command. - let path = '/data/app/el1/bundle/public/ohos.acts.multimedia.video.videoplayer/ohos.acts.multimedia.video.videoplayer/assets/entry/resources/rawfile/H264_AAC.mp4'; + }) + } + + // The following demo shows how to use the file system to open the sandbox address, obtain the media file address, and play the media file using the URL attribute. + async avPlayerUrlDemo() { + // Create an AVPlayer instance. + this.avPlayer = await media.createAVPlayer(); + // Set a callback function for state changes. + this.setAVPlayerCallback(); + let fdPath = 'fd://'; + let context = getContext(this) as common.UIAbilityContext; + // Obtain the sandbox address filesDir through UIAbilityContext. The stage model is used as an example. 
+    let pathDir = context.filesDir;
+    let path = pathDir + '/H264_AAC.mp4';
+    // Open the corresponding file address to obtain the file descriptor and assign a value to the URL to trigger the reporting of the initialized state.
    let file = await fs.open(path);
    fdPath = fdPath + '' + file.fd;
-    // Call createVideoPlayer to create a VideoPlayer instance.
-    await media.createVideoPlayer().then((video) => {
-      if (typeof (video) != 'undefined') {
-        console.info('createVideoPlayer success!');
-        videoPlayer = video;
-      } else {
-        console.info('createVideoPlayer fail!');
-      }
-    }, this.failureCallback).catch(this.catchCallback);
-    // Set the playback source for the player.
-    videoPlayer.url = fdPath;
-
-    // Set the surface ID to display the video image.
-    await videoPlayer.setDisplaySurface(surfaceID).then(() => {
-      console.info('setDisplaySurface success');
-    }, this.failureCallback).catch(this.catchCallback);
-
-    // Call the prepare API to prepare for playback.
-    await videoPlayer.prepare().then(() => {
-      console.info('prepare success');
-    }, this.failureCallback).catch(this.catchCallback);
-    // Set the loop playback attribute.
-    videoPlayer.loop = true;
-    // Call the play API to start loop playback.
-    await videoPlayer.play().then(() => {
-      console.info('play success, loop value is ' + videoPlayer.loop);
-    }, this.failureCallback).catch(this.catchCallback);
+    this.avPlayer.url = fdPath;
+  }
+
+  // The following demo shows how to use resourceManager to obtain the media file packed in the HAP file and play the media file by using the fdSrc attribute.
+  async avPlayerFdSrcDemo() {
+    // Create an AVPlayer instance.
+    this.avPlayer = await media.createAVPlayer();
+    // Set a callback function for state changes.
+    this.setAVPlayerCallback();
+    // Call getRawFd of the resourceManager member of UIAbilityContext to obtain the file descriptor of the media asset.
+    // The return type is {fd,offset,length}, where fd indicates the file descriptor of the HAP file, offset indicates the media asset offset, and length indicates the length of the media asset to play.
+    let context = getContext(this) as common.UIAbilityContext;
+    let fileDescriptor = await context.resourceManager.getRawFd('H264_AAC.mp4');
+    // Assign a value to fdSrc to trigger the reporting of the initialized state.
+    this.avPlayer.fdSrc = fileDescriptor;
+  }
+}
+```
diff --git a/en/application-dev/media/video-recorder.md b/en/application-dev/media/video-recorder.md
deleted file mode 100644
index fd9de91b4bae0591e2a5dc4869455bdd4055943e..0000000000000000000000000000000000000000
--- a/en/application-dev/media/video-recorder.md
+++ /dev/null
@@ -1,160 +0,0 @@
-# Video Recording Development
-
-## Introduction
-
-You can use video recording APIs to capture audio and video signals, encode them, and save them to files. You can start, suspend, resume, and stop recording, and release resources. You can also specify parameters such as the encoding format, encapsulation format, and file path for video recording.
-
-## Working Principles
-
-The following figures show the video recording state transition and the interaction with external modules for video recording.
- -**Figure 1** Video recording state transition - -![en-us_image_video_recorder_state_machine](figures/en-us_image_video_recorder_state_machine.png) - -**Figure 2** Interaction with external modules for video recording - -![en-us_image_video_recorder_zero](figures/en-us_image_video_recorder_zero.png) - -**NOTE**: When a third-party camera application or system camera calls a JS interface provided by the JS interface layer, the framework layer uses the media service of the native framework to invoke the audio component. Through the audio HDI, the audio component captures audio data, encodes the audio data through software, and saves the encoded audio data to a file. The graphics subsystem captures image data through the video HDI, encodes the image data through the video codec HDI, and saves the encoded image data to a file. In this way, video recording is implemented. - -## Constraints - -Before developing video recording, configure the permissions **ohos.permission.MICROPHONE** and **ohos.permission.CAMERA** for your application. For details about the configuration, see [Permission Application Guide](../security/accesstoken-guidelines.md). - -## How to Develop - -For details about the APIs, see [VideoRecorder in the Media API](../reference/apis/js-apis-media.md#videorecorder9). - -### Full-Process Scenario - -The full video recording process includes creating an instance, setting recording parameters, starting, pausing, resuming, and stopping recording, and releasing resources. - -```js -import media from '@ohos.multimedia.media' -import mediaLibrary from '@ohos.multimedia.mediaLibrary' -export class VideoRecorderDemo { - private testFdNumber; // Used to save the FD address. - // pathName indicates the passed recording file name, for example, 01.mp4. The generated file address is /storage/media/100/local/files/Video/01.mp4. - // To use the media library, declare the following permissions: ohos.permission.MEDIA_LOCATION, ohos.permission.WRITE_MEDIA, and ohos.permission.READ_MEDIA. - async getFd(pathName) { - let displayName = pathName; - const mediaTest = mediaLibrary.getMediaLibrary(); - let fileKeyObj = mediaLibrary.FileKey; - let mediaType = mediaLibrary.MediaType.VIDEO; - let publicPath = await mediaTest.getPublicDirectory(mediaLibrary.DirectoryType.DIR_VIDEO); - let dataUri = await mediaTest.createAsset(mediaType, displayName, publicPath); - if (dataUri != undefined) { - let args = dataUri.id.toString(); - let fetchOp = { - selections : fileKeyObj.ID + "=?", - selectionArgs : [args], - } - let fetchFileResult = await mediaTest.getFileAssets(fetchOp); - let fileAsset = await fetchFileResult.getAllObject(); - let fdNumber = await fileAsset[0].open('Rw'); - this.testFdNumber = "fd://" + fdNumber.toString(); - } - } - - // Error callback triggered in the case of an error - failureCallback(error) { - console.info('error happened, error name is ' + error.name); - console.info('error happened, error code is ' + error.code); - console.info('error happened, error message is ' + error.message); - } - - // Error callback triggered in the case of an exception - catchCallback(error) { - console.info('catch error happened, error name is ' + error.name); - console.info('catch error happened, error code is ' + error.code); - console.info('catch error happened, error message is ' + error.message); - } - - async videoRecorderDemo() { - let videoRecorder = null; // videoRecorder is an empty object and assigned with a value after createVideoRecorder is successfully called. 
- let surfaceID = null; // Used to save the surface ID returned by getInputSurface. - // Obtain the FD address of the video to be recorded. - await this.getFd('01.mp4'); - // Configure the parameters related to video recording based on those supported by the hardware device. - let videoProfile = { - audioBitrate : 48000, - audioChannels : 2, - audioCodec : 'audio/mp4a-latm', - audioSampleRate : 48000, - fileFormat : 'mp4', - videoBitrate : 2000000, - videoCodec : 'video/mp4v-es', - videoFrameWidth : 640, - videoFrameHeight : 480, - videoFrameRate : 30 - } - - let videoConfig = { - audioSourceType : 1, - videoSourceType : 0, - profile : videoProfile, - url : this.testFdNumber, // testFdNumber is generated by getFd. - orientationHint : 0, - location : { latitude : 30, longitude : 130 } - } - // Create a VideoRecorder object. - await media.createVideoRecorder().then((recorder) => { - console.info('case createVideoRecorder called'); - if (typeof (recorder) != 'undefined') { - videoRecorder = recorder; - console.info('createVideoRecorder success'); - } else { - console.info('createVideoRecorder failed'); - } - }, this.failureCallback).catch(this.catchCallback); - - // Call the prepare API to prepare for video recording. - await videoRecorder.prepare(videoConfig).then(() => { - console.info('prepare success'); - }, this.failureCallback).catch(this.catchCallback); - - // Obtain the surface ID, save it, and pass it to camera-related APIs. - await videoRecorder.getInputSurface().then((surface) => { - console.info('getInputSurface success'); - surfaceID = surface; - }, this.failureCallback).catch(this.catchCallback); - - // Video recording depends on camera-related APIs. The following operations can be performed only after the video output start API is invoked. For details about how to call the camera APIs, see the samples. - // Start video recording. - await videoRecorder.start().then(() => { - console.info('start success'); - }, this.failureCallback).catch(this.catchCallback); - - // Pause video recording before the video output stop API of the camera is invoked. - await videoRecorder.pause().then(() => { - console.info('pause success'); - }, this.failureCallback).catch(this.catchCallback); - - // Resume video recording after the video output start API of the camera is invoked. - await videoRecorder.resume().then(() => { - console.info('resume success'); - }, this.failureCallback).catch(this.catchCallback); - - // Stop video recording after the video output stop API of the camera is invoked. - await videoRecorder.stop().then(() => { - console.info('stop success'); - }, this.failureCallback).catch(this.catchCallback); - - // Reset the recording configuration. - await videoRecorder.reset().then(() => { - console.info('reset success'); - }, this.failureCallback).catch(this.catchCallback); - - // Release the video recording resources and camera object resources. - await videoRecorder.release().then(() => { - console.info('release success'); - }, this.failureCallback).catch(this.catchCallback); - - // Set the related object to null. - videoRecorder = undefined; - surfaceID = undefined; - } -} -``` - diff --git a/en/application-dev/media/video-recording.md b/en/application-dev/media/video-recording.md new file mode 100644 index 0000000000000000000000000000000000000000..229a3444dcd3771694ac60573f826063e7bf22a1 --- /dev/null +++ b/en/application-dev/media/video-recording.md @@ -0,0 +1,235 @@ +# Video Recording + +OpenHarmony provides the AVRecorder for you to develop the video recording service. 
The AVRecorder supports audio recording, audio encoding, video encoding, audio encapsulation, and video encapsulation. It is applicable to simple video recording scenarios and can be used to generate local video files directly.
+
+You will learn how to use the AVRecorder to complete the process of starting, pausing, resuming, and stopping recording.
+
+During application development, you can use the **state** attribute of the AVRecorder to obtain the AVRecorder state or call **on('stateChange')** to listen for state changes. Your code must meet the state machine requirements. For example, **pause()** is called only when the AVRecorder is in the **started** state, and **resume()** is called only when it is in the **paused** state.
+
+**Figure 1** Recording state transition
+
+![Recording state change](figures/video-recording-status-change.png)
+
+For details about the state, see [AVRecorderState](../reference/apis/js-apis-media.md#avrecorderstate9).
+
+## How to Develop
+
+> **NOTE**
+>
+> The AVRecorder only processes video data. To complete video recording, it must work with the video data collection module, which transfers the captured video data to the AVRecorder for data processing through the surface. A typical video data collection module is the camera module, which currently is available only to system applications. For details, see [Camera](../reference/apis/js-apis-camera.md).
+
+Read [AVRecorder](../reference/apis/js-apis-media.md#avrecorder9) for the API reference.
+
+1. Create an **AVRecorder** instance. The AVRecorder enters the **idle** state.
+
+   ```ts
+   import media from '@ohos.multimedia.media'
+   let avRecorder
+   media.createAVRecorder().then((recorder) => {
+     avRecorder = recorder
+   }, (error) => {
+     console.error('createAVRecorder failed')
+   })
+   ```
+
+2. Set the events to listen for.
+   | Event Type| Description|
+   | -------- | -------- |
+   | stateChange | Mandatory; used to listen for changes of the **state** attribute of the AVRecorder.|
+   | error | Mandatory; used to listen for AVRecorder errors.|
+
+   ```ts
+   // Callback function for state changes.
+   avRecorder.on('stateChange', (state, reason) => {
+     console.info('current state is: ' + state);
+   })
+   // Callback function for errors.
+   avRecorder.on('error', (err) => {
+     console.error('error happened, error message is ' + err);
+   })
+   ```
+
+3. Set video recording parameters and call **prepare()**. The AVRecorder enters the **prepared** state.
+   > **NOTE**
+   >
+   > Pay attention to the following when configuring parameters:
+   >
+   > - In pure video recording scenarios, set only video-related parameters in **avConfig** of **prepare()**. If audio-related parameters are configured, the system regards it as audio and video recording.
+   >
+   > - The [recording specifications](avplayer-avrecorder-overview.md#supported-formats) in use must be among those supported. The video bit rate, resolution, and frame rate are subject to the ranges supported by the hardware device.
+   >
+   > - The recording output URL (URL in **avConfig** in the sample code) must be in the format of fd://xx (where xx indicates a file descriptor). You must call [ohos.file.fs](../reference/apis/js-apis-file-fs.md) to implement access to the application file. For details, see [Application File Access and Management](../file-management/app-file-access.md).
+
+   ```ts
+   let avProfile = {
+     fileFormat: media.ContainerFormatType.CFT_MPEG_4, // Video file encapsulation format. Only MP4 is supported.
+     videoBitrate: 200000, // Video bit rate.
+     videoCodec: media.CodecMimeType.VIDEO_AVC, // Video file encoding format. Both MPEG-4 and AVC are supported.
+     videoFrameWidth: 640, // Video frame width.
+     videoFrameHeight: 480, // Video frame height.
+     videoFrameRate: 30 // Video frame rate.
+   }
+   let avConfig = {
+     videoSourceType: media.VideoSourceType.VIDEO_SOURCE_TYPE_SURFACE_YUV, // Video source type. YUV and ES are supported.
+     profile: avProfile, // Reference the avProfile object defined above.
+     url: 'fd://35', // Create, read, and write a file by referring to the sample code in Application File Access and Management.
+     rotation: 0, // Video rotation angle. The default value is 0, indicating that the video is not rotated. The value can be 0, 90, 180, or 270.
+   }
+   avRecorder.prepare(avConfig).then(() => {
+     console.info('avRecorder prepare success')
+   }, (error) => {
+     console.error('avRecorder prepare failed')
+   })
+   ```
+
+4. Obtain the surface ID required for video recording.
+
+   Call **getInputSurface()**. The returned surface ID is transferred to the video data collection module (video input source), which is the camera module in the sample code.
+
+   The video data collection module obtains the surface based on the surface ID and transmits video data to the AVRecorder through the surface. Then the AVRecorder processes the video data.
+
+   ```ts
+   avRecorder.getInputSurface().then((surfaceId) => {
+     console.info('avRecorder getInputSurface success')
+   }, (error) => {
+     console.error('avRecorder getInputSurface failed')
+   })
+   ```
+
+5. Initialize the video data input source.
+
+   This step is performed in the video data collection module. For the camera module, you need to create a **Camera** instance, obtain the camera list, create a camera input stream, and create a video output stream. For details, see [Recording](camera-recording-case.md).
+
+6. Start recording.
+
+   Start the input source to input video data, for example, by calling **camera.VideoOutput.start**. Then call **AVRecorder.start()** to switch the AVRecorder to the **started** state.
+
+7. Call **pause()** to pause recording. The AVRecorder enters the **paused** state. In addition, pause data input in the video data collection module, for example, by calling **camera.VideoOutput.stop**.
+
+8. Call **resume()** to resume recording. The AVRecorder enters the **started** state again.
+
+9. Call **stop()** to stop recording. The AVRecorder enters the **stopped** state. In addition, stop camera recording in the video data collection module.
+
+10. Call **reset()** to reset the resources. The AVRecorder enters the **idle** state. In this case, you can reconfigure the recording parameters.
+
+11. Call **release()** to release the resources. The AVRecorder enters the **released** state. In addition, release the video data input source resources (camera resources in this example).
+
+
+## Sample Code
+
+Refer to the sample code below to complete the process of starting, pausing, resuming, and stopping recording.
+
+
+```ts
+import media from '@ohos.multimedia.media'
+const TAG = 'VideoRecorderDemo:'
+export class VideoRecorderDemo {
+  private avRecorder;
+  private videoOutSurfaceId;
+  private avProfile = {
+    fileFormat: media.ContainerFormatType.CFT_MPEG_4, // Video file encapsulation format. Only MP4 is supported.
+    videoBitrate: 100000, // Video bit rate.
+    videoCodec: media.CodecMimeType.VIDEO_AVC, // Video file encoding format. Both MPEG-4 and AVC are supported.
+    videoFrameWidth: 640, // Video frame width.
+    videoFrameHeight: 480, // Video frame height.
+    videoFrameRate: 30 // Video frame rate.
+  }
+  private avConfig = {
+    videoSourceType: media.VideoSourceType.VIDEO_SOURCE_TYPE_SURFACE_YUV, // Video source type. YUV and ES are supported.
+    profile: this.avProfile,
+    url: 'fd://35', // Create, read, and write a file by referring to the sample code in Application File Access and Management.
+    rotation: 0, // Video rotation angle. The default value is 0, indicating that the video is not rotated. The value can be 0, 90, 180, or 270.
+  }
+
+  // Set AVRecorder callback functions.
+  setAvRecorderCallback() {
+    // Callback function for state changes.
+    this.avRecorder.on('stateChange', (state, reason) => {
+      console.info(TAG + 'current state is: ' + state);
+    })
+    // Callback function for errors.
+    this.avRecorder.on('error', (err) => {
+      console.error(TAG + 'error occurred, error message is ' + err);
+    })
+  }
+
+  // Complete camera-related preparations.
+  async prepareCamera() {
+    // For details on the implementation, see the camera document.
+  }
+
+  // Start the camera stream output.
+  async startCameraOutput() {
+    // Call start of the VideoOutput class to start video output.
+  }
+
+  // Stop the camera stream output.
+  async stopCameraOutput() {
+    // Call stop of the VideoOutput class to stop video output.
+  }
+
+  // Release the camera instance.
+  async releaseCamera() {
+    // Release the instances created during camera preparation.
+  }
+
+  // Process of starting recording.
+  async startRecordingProcess() {
+    // 1. Create an AVRecorder instance.
+    this.avRecorder = await media.createAVRecorder();
+    this.setAvRecorderCallback();
+    // 2. Obtain the file descriptor of the recorded file. The obtained file descriptor is passed in to the URL in avConfig. The implementation is omitted here.
+    // 3. Set recording parameters to complete the preparations.
+    await this.avRecorder.prepare(this.avConfig);
+    this.videoOutSurfaceId = await this.avRecorder.getInputSurface();
+    // 4. Complete camera-related preparations.
+    await this.prepareCamera();
+    // 5. Start the camera stream output.
+    await this.startCameraOutput();
+    // 6. Start recording.
+    await this.avRecorder.start();
+  }
+
+  // Process of pausing recording.
+  async pauseRecordingProcess() {
+    if (this.avRecorder.state === 'started') { // pause() can be called only when the AVRecorder is in the started state.
+      await this.avRecorder.pause();
+      await this.stopCameraOutput(); // Stop the camera stream output.
+    }
+  }
+
+  // Process of resuming recording.
+  async resumeRecordingProcess() {
+    if (this.avRecorder.state === 'paused') { // resume() can be called only when the AVRecorder is in the paused state.
+      await this.startCameraOutput(); // Start camera stream output.
+      await this.avRecorder.resume();
+    }
+  }
+
+  // Process of stopping recording.
+  async stopRecordingProcess() {
+    // 1. Stop recording.
+    if (this.avRecorder.state === 'started'
+      || this.avRecorder.state === 'paused') { // stop() can be called only when the AVRecorder is in the started or paused state.
+      await this.avRecorder.stop();
+      await this.stopCameraOutput();
+    }
+    // 2. Reset the AVRecorder.
+    await this.avRecorder.reset();
+    // 3. Release the AVRecorder instance.
+    await this.avRecorder.release();
+    // 4. After the file is recorded, close the file descriptor. The implementation is omitted here.
+    // 5. Release the camera instance.
+    await this.releaseCamera();
+  }
+
+  // Complete sample code for starting, pausing, resuming, and stopping recording.
+  async videoRecorderDemo() {
+    await this.startRecordingProcess(); // Start recording.
+    // You can set the recording duration here, for example, by awaiting a timer so that recording continues for the desired time before pausing.
+    await this.pauseRecordingProcess(); // Pause recording.
+    await this.resumeRecordingProcess(); // Resume recording.
+    await this.stopRecordingProcess(); // Stop recording.
+  }
+}
+```
diff --git a/en/application-dev/media/volume-management.md b/en/application-dev/media/volume-management.md
new file mode 100644
index 0000000000000000000000000000000000000000..f6461d968856c7d865c999ab9c604e5ef718548b
--- /dev/null
+++ b/en/application-dev/media/volume-management.md
@@ -0,0 +1,48 @@
+# Volume Management
+
+You can use different APIs to manage the system volume and audio stream volume. The system volume and audio stream volume refer to the volume of an OpenHarmony device and the volume of a specified audio stream, respectively. The audio stream volume is restricted by the system volume.
+
+## System Volume
+
+The API for managing the system volume is **AudioVolumeManager**. Before using this API, you must call **getVolumeManager()** to obtain an **AudioVolumeManager** instance. Currently, this API can be used to obtain volume information and listen for volume changes. It cannot be used to adjust the system volume.
+
+```ts
+import audio from '@ohos.multimedia.audio';
+let audioManager = audio.getAudioManager();
+let audioVolumeManager = audioManager.getVolumeManager();
+```
+
+### Listening for System Volume Changes
+
+You can set an event to listen for system volume changes.
+
+```ts
+audioVolumeManager.on('volumeChange', (volumeEvent) => {
+  console.info(`VolumeType of stream: ${volumeEvent.volumeType} `);
+  console.info(`Volume level: ${volumeEvent.volume} `);
+  console.info(`Whether to updateUI: ${volumeEvent.updateUi} `);
+});
+```
+
+### Adjusting the System Volume (for System Applications Only)
+
+Currently, the system volume is mainly adjusted by using system APIs, which serve the physical volume buttons and the Settings application. When the user presses a volume button, a system API is called to adjust the system volume, including the volume for media, ringtones, and notifications.
+
+## Audio Stream Volume
+
+The API for managing the audio stream volume is **setVolume()** in the **AVPlayer** or **AudioRenderer** class. The code snippet below sets the audio stream volume by using the **AVPlayer** class:
+
+```ts
+let volume = 1.0; // Specified volume. The value range is [0.00-1.00]. The value 1 indicates the maximum volume.
+avPlayer.setVolume(volume);
+```
+
+The code snippet below sets the audio stream volume by using the **AudioRenderer** class:
+
+```ts
+audioRenderer.setVolume(0.5).then((data) => { // The volume range is [0.0-1.0].
+  console.info('Invoke setVolume succeeded.');
+}).catch((err) => {
+  console.error(`Invoke setVolume failed, code is ${err.code}, message is ${err.message}`);
+});
+```
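+
+Because the audio stream volume is restricted by the system volume, it can be useful to read the current system volume before adjusting a stream. The following is a hedged sketch of such a query through **AudioVolumeGroupManager**; the use of the default volume group and the media stream type here are assumptions for illustration:
+
+```ts
+import audio from '@ohos.multimedia.audio';
+
+// A sketch of reading the current system volume level for the media stream, assuming the default volume group.
+async function printMediaVolume() {
+  let audioVolumeManager = audio.getAudioManager().getVolumeManager();
+  let groupManager = await audioVolumeManager.getVolumeGroupManager(audio.DEFAULT_VOLUME_GROUP_ID);
+  let volumeLevel = await groupManager.getVolume(audio.AudioVolumeType.MEDIA);
+  console.info(`Current media volume level: ${volumeLevel}`);
+}
+```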