Commit 2b08d7e2 authored by Gloria

Update docs against 15756+15924+15757+15843+15912

Signed-off-by: wusongqing<wusongqing@huawei.com>
Parent 5cc343cf
@@ -21,38 +21,49 @@ The following figure shows the audio capturer state transitions.
## Constraints

Before developing the audio data collection feature, configure the **ohos.permission.MICROPHONE** permission for your application. For details, see [Permission Application Guide](../security/accesstoken-guidelines.md#declaring-permissions-in-the-configuration-file).
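Because the microphone permission is user granted, the application typically also needs to request it from the user at runtime after declaring it. The snippet below is a minimal sketch (not part of the original guide), assuming a stage-model ability context is available as **globalThis.abilityContext**, as in the AVPlayer example later in this document:

```js
import abilityAccessCtrl from '@ohos.abilityAccessCtrl';

async function requestMicrophonePermission() {
  let atManager = abilityAccessCtrl.createAtManager();
  // Prompt the user to grant the permission declared in the configuration file.
  let result = await atManager.requestPermissionsFromUser(globalThis.abilityContext, ['ohos.permission.MICROPHONE']);
  // In PermissionRequestResult, an authResults value of 0 means the permission is granted.
  if (result.authResults[0] === 0) {
    console.info('PermissionLog: microphone permission granted');
  } else {
    console.info('PermissionLog: microphone permission denied');
  }
}
```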
## How to Develop

For details about the APIs, see [AudioCapturer in Audio Management](../reference/apis/js-apis-audio.md#audiocapturer8).

1. Use **createAudioCapturer()** to create a global **AudioCapturer** instance.

Set parameters of the **AudioCapturer** instance in **audioCapturerOptions**. This instance is used to capture audio, control and obtain the recording state, and register a callback for notification.
```js
import audio from '@ohos.multimedia.audio';
import fs from '@ohos.file.fs'; // It will be used for the call of the read function in step 3.

// Perform a self-test on APIs related to audio capturing.
@Entry
@Component
struct AudioCapturerDemo {
  @State message: string = 'Hello World'
  private audioCapturer: audio.AudioCapturer; // Declared here so that it can be used globally.
  private bufferSize: number = 0;             // It will be used for the call of the read function in step 3.

  async initAudioCapturer() {
    let audioStreamInfo = {
      samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
      channels: audio.AudioChannel.CHANNEL_1,
      sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
      encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
    }
    let audioCapturerInfo = {
      source: audio.SourceType.SOURCE_TYPE_MIC,
      capturerFlags: 0 // 0 is the extended flag bit of the audio capturer. The default value is 0.
    }
    let audioCapturerOptions = {
      streamInfo: audioStreamInfo,
      capturerInfo: audioCapturerInfo
    }
    this.audioCapturer = await audio.createAudioCapturer(audioCapturerOptions);
    console.log('AudioRecLog: Create audio capturer success.');
  }
```
2. Use **start()** to start audio recording.
@@ -60,23 +71,18 @@
The capturer state will be **STATE_RUNNING** once the audio capturer is started. The application can then begin reading buffers.
```js
async startCapturer() {
  let state = this.audioCapturer.state;
  // The audio capturer can be started only when it is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
  if (state == audio.AudioState.STATE_PREPARED || state == audio.AudioState.STATE_PAUSED ||
    state == audio.AudioState.STATE_STOPPED) {
    await this.audioCapturer.start();
    state = this.audioCapturer.state;
    if (state == audio.AudioState.STATE_RUNNING) {
      console.info('AudioRecLog: Capturer started');
    } else {
      console.error('AudioRecLog: Capturer start failed');
    }
  }
}
```
@@ -86,91 +92,88 @@
The following example shows how to write recorded data into a file.
```js
async readData() {
  let state = this.audioCapturer.state;
  // The read operation can be performed only when the state is STATE_RUNNING.
  if (state != audio.AudioState.STATE_RUNNING) {
    console.info('Capturer is not in a correct state to read');
    return;
  }
  const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the collected audio file.
  let file = fs.openSync(path, 0o2);
  let fd = file.fd;
  if (file !== null) {
    console.info('AudioRecLog: file created');
  } else {
    console.info('AudioRecLog: file create : FAILED');
    return;
  }
  if (fd !== null) {
    console.info('AudioRecLog: file fd opened in append mode');
  }
  let numBuffersToCapture = 150; // Write data for 150 times.
  let count = 0;
  while (numBuffersToCapture) {
    this.bufferSize = await this.audioCapturer.getBufferSize();
    let buffer = await this.audioCapturer.read(this.bufferSize, true);
    let options = {
      offset: count * this.bufferSize,
      length: this.bufferSize
    }
    if (buffer === undefined) {
      console.info('AudioRecLog: read buffer failed');
    } else {
      let number = fs.writeSync(fd, buffer, options);
      console.info(`AudioRecLog: data written: ${number}`);
    }
    numBuffersToCapture--;
    count++;
  }
}
```
4. Once the recording is complete, call **stop()** to stop the recording.
```js
async stopCapturer() {
  let state = this.audioCapturer.state;
  // The audio capturer can be stopped only when it is in the STATE_RUNNING or STATE_PAUSED state.
  if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
    console.info('AudioRecLog: Capturer is not running or paused');
    return;
  }
  await this.audioCapturer.stop();
  state = this.audioCapturer.state;
  if (state == audio.AudioState.STATE_STOPPED) {
    console.info('AudioRecLog: Capturer stopped');
  } else {
    console.error('AudioRecLog: Capturer stop failed');
  }
}
```
5. After the task is complete, call **release()** to release related resources.
```js
async releaseCapturer() {
  let state = this.audioCapturer.state;
  // The audio capturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
  if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
    console.info('AudioRecLog: Capturer already released');
    return;
  }
  await this.audioCapturer.release();
  state = this.audioCapturer.state;
  if (state == audio.AudioState.STATE_RELEASED) {
    console.info('AudioRecLog: Capturer released');
  } else {
    console.info('AudioRecLog: Capturer release failed');
  }
}
```
6. (Optional) Obtain the audio capturer information.
@@ -178,23 +181,20 @@
You can use the following code to obtain the audio capturer information:
```js
async getAudioCapturerInfo() {
  // Obtain the audio capturer state.
  let state = this.audioCapturer.state;
  // Obtain the audio capturer information.
  let audioCapturerInfo : audio.AudioCapturerInfo = await this.audioCapturer.getCapturerInfo();
  // Obtain the audio stream information.
  let audioStreamInfo : audio.AudioStreamInfo = await this.audioCapturer.getStreamInfo();
  // Obtain the audio stream ID.
  let audioStreamId : number = await this.audioCapturer.getAudioStreamId();
  // Obtain the Unix timestamp, in nanoseconds.
  let audioTime : number = await this.audioCapturer.getAudioTime();
  // Obtain a proper minimum buffer size.
  let bufferSize : number = await this.audioCapturer.getBufferSize();
}
```
7. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.
@@ -202,12 +202,13 @@
After the mark reached event is subscribed to, when the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
async markReach() {
  this.audioCapturer.on('markReach', 10, (reachNumber) => {
    console.info('Mark reach event Received');
    console.info(`The Capturer reached frame: ${reachNumber}`);
  });
  this.audioCapturer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for.
}
```
8. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event.
@@ -215,40 +216,43 @@
After the period reached event is subscribed to, each time the number of frames collected by the audio capturer reaches the specified value, a callback is triggered and the specified value is returned.
```js
async periodReach() {
  this.audioCapturer.on('periodReach', 10, (reachNumber) => {
    console.info('Period reach event Received');
    console.info(`In this period, the Capturer reached frame: ${reachNumber}`);
  });
  this.audioCapturer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for.
}
```
9. If your application needs to perform some operations when the audio capturer state is updated, it can subscribe to the state change event. When the audio capturer state is updated, the application receives a callback containing the event type.
```js
async stateChange() {
  this.audioCapturer.on('stateChange', (state) => {
    console.info(`AudioCapturerLog: Changed State to : ${state}`)
    switch (state) {
      case audio.AudioState.STATE_PREPARED:
        console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------');
        console.info('Audio State is : Prepared');
        break;
      case audio.AudioState.STATE_RUNNING:
        console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------');
        console.info('Audio State is : Running');
        break;
      case audio.AudioState.STATE_STOPPED:
        console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------');
        console.info('Audio State is : Stopped');
        break;
      case audio.AudioState.STATE_RELEASED:
        console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------');
        console.info('Audio State is : Released');
        break;
      default:
        console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
        console.info('Audio State is : Invalid');
        break;
    }
  });
}
```
@@ -292,13 +292,13 @@

```js
export class AVPlayerDemo {
  async avPlayerDemo() {
    // Create an AVPlayer instance.
    this.avPlayer = await media.createAVPlayer()
    let fileDescriptor = undefined
    // Use getRawFileDescriptor of the resource management module to obtain the media assets in the application, and use the fdSrc attribute of the AVPlayer to initialize the media asset.
    // For details on the fd/offset/length parameters, see the Media API. The globalThis.abilityContext parameter is a system environment variable and is saved as a global variable on the main page during application startup.
    await globalThis.abilityContext.resourceManager.getRawFileDescriptor('H264_AAC.mp4').then((value) => {
      fileDescriptor = {fd: value.fd, offset: value.offset, length: value.length}
    })
    this.avPlayer.fdSrc = fileDescriptor
  }
}
```
# AVSession Overview
> **NOTE**
>
> All APIs of the **AVSession** module are system APIs and can be called only by system applications.
## Overview

AVSession, short for audio and video session, is also known as media session.

@@ -49,4 +53,4 @@

- AVSession can transmit media playback information and control commands. It does not display information or execute control commands.
- Do not develop Media Controller for common applications. For common audio and video applications running on OpenHarmony, the default control end is Media Controller, which is a system application. You do not need to carry out additional development for Media Controller.
- If you want to develop your own system running OpenHarmony, you can develop your own Media Controller.
- For better background management of audio and video applications, the **AVSession** module enforces background control for applications. Only applications that have accessed AVSession can play audio in the background. Otherwise, the system forcibly pauses the playback when an application switches to the background.
@@ -23,9 +23,9 @@ import audio from '@ohos.multimedia.audio';

| Name | Type | Readable| Writable| Description |
| --------------------------------------- | ----------| ---- | ---- | ------------------ |
| LOCAL_NETWORK_ID<sup>9+</sup> | string | Yes | No | Network ID of the local device.<br>This is a system API.<br>**System capability**: SystemCapability.Multimedia.Audio.Device |
| DEFAULT_VOLUME_GROUP_ID<sup>9+</sup> | number | Yes | No | Default volume group ID.<br>**System capability**: SystemCapability.Multimedia.Audio.Volume |
| DEFAULT_INTERRUPT_GROUP_ID<sup>9+</sup> | number | Yes | No | Default audio interruption group ID.<br>**System capability**: SystemCapability.Multimedia.Audio.Interrupt |
**Example**
@@ -349,7 +349,10 @@ Enumerates the audio stream types.

| VOICE_CALL<sup>8+</sup> | 0 | Audio stream for voice calls.|
| RINGTONE | 2 | Audio stream for ringtones. |
| MEDIA | 3 | Audio stream for media purpose. |
| ALARM<sup>10+</sup> | 4 | Audio stream for alarming. |
| ACCESSIBILITY<sup>10+</sup> | 5 | Audio stream for accessibility. |
| VOICE_ASSISTANT<sup>8+</sup> | 9 | Audio stream for voice assistant.|
| ULTRASONIC<sup>10+</sup> | 10 | Audio stream for ultrasonic.<br>This is a system API.|
| ALL<sup>9+</sup> | 100 | All public audio streams.<br>This is a system API.|
## InterruptRequestResultType<sup>9+</sup>
@@ -531,7 +534,7 @@ Enumerates the audio content types.

| CONTENT_TYPE_MOVIE | 3 | Movie. |
| CONTENT_TYPE_SONIFICATION | 4 | Notification tone. |
| CONTENT_TYPE_RINGTONE<sup>8+</sup> | 5 | Ringtone. |
| CONTENT_TYPE_ULTRASONIC<sup>10+</sup>| 9 | Ultrasonic.<br>This is a system API.|
## StreamUsage

Enumerates the audio stream usage.

@@ -544,7 +547,10 @@

| STREAM_USAGE_MEDIA | 1 | Used for media. |
| STREAM_USAGE_VOICE_COMMUNICATION | 2 | Used for voice communication.|
| STREAM_USAGE_VOICE_ASSISTANT<sup>9+</sup> | 3 | Used for voice assistant.|
| STREAM_USAGE_ALARM<sup>10+</sup> | 4 | Used for alarming. |
| STREAM_USAGE_NOTIFICATION_RINGTONE | 6 | Used for notification.|
| STREAM_USAGE_ACCESSIBILITY<sup>10+</sup> | 8 | Used for accessibility. |
| STREAM_USAGE_SYSTEM<sup>10+</sup> | 9 | System tone (such as screen lock or keypad tone).<br>This is a system API.|
## InterruptRequestType<sup>9+</sup>
@@ -1757,7 +1763,7 @@ Sets a device to the active state. This API uses an asynchronous callback to return the result.

| Name | Type | Mandatory| Description |
| ---------- | ------------------------------------- | ---- | ------------------------ |
| deviceType | [ActiveDeviceType](#activedevicetypedeprecated) | Yes | Active audio device type. |
| active | boolean | Yes | Active state to set. The value **true** means to set the device to the active state, and **false** means the opposite. |
| callback | AsyncCallback&lt;void&gt; | Yes | Callback used to return the result.|
@@ -1789,7 +1795,7 @@ Sets a device to the active state. This API uses a promise to return the result.

| Name | Type | Mandatory| Description |
| ---------- | ------------------------------------- | ---- | ------------------ |
| deviceType | [ActiveDeviceType](#activedevicetypedeprecated) | Yes | Active audio device type.|
| active | boolean | Yes | Active state to set. The value **true** means to set the device to the active state, and **false** means the opposite. |
**Return value**
@@ -1823,7 +1829,7 @@ Checks whether a device is active. This API uses an asynchronous callback to return the result.

| Name | Type | Mandatory| Description |
| ---------- | ------------------------------------- | ---- | ------------------------ |
| deviceType | [ActiveDeviceType](#activedevicetypedeprecated) | Yes | Active audio device type. |
| callback | AsyncCallback&lt;boolean&gt; | Yes | Callback used to return the active state of the device.|
**Example**
@@ -1854,7 +1860,7 @@ Checks whether a device is active. This API uses a promise to return the result.

| Name | Type | Mandatory| Description |
| ---------- | ------------------------------------- | ---- | ------------------ |
| deviceType | [ActiveDeviceType](#activedevicetypedeprecated) | Yes | Active audio device type.|
**Return value**
@@ -4568,15 +4574,15 @@

```js
let filePath = path + '/StarWars10s-2C-48000-4SW.wav';
let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
let stat = await fs.stat(path);
let buf = new ArrayBuffer(bufferSize);
let len = stat.size % bufferSize == 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
for (let i = 0;i < len; i++) {
  let options = {
    offset: i * bufferSize,
    length: bufferSize
  }
  let readsize = await fs.read(file.fd, buf, options)
  let writeSize = await new Promise((resolve,reject)=>{
    audioRenderer.write(buf,(err,writeSize)=>{
      if(err){
        reject(err)
      }else{
        resolve(writeSize)
      }
    })
  })
}
```
### write<sup>8+</sup>
@@ -4621,15 +4628,15 @@

```js
let filePath = path + '/StarWars10s-2C-48000-4SW.wav';
let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
let stat = await fs.stat(path);
let buf = new ArrayBuffer(bufferSize);
let len = stat.size % bufferSize == 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
for (let i = 0;i < len; i++) {
  let options = {
    offset: i * bufferSize,
    length: bufferSize
  }
  let readsize = await fs.read(file.fd, buf, options)
  try{
    let writeSize = await audioRenderer.write(buf);
  } catch(err) {
    console.error(`audioRenderer.write err: ${err}`);
  }
}
```
@@ -4969,7 +4976,7 @@

| ID | Error Message |
| ------- | ------------------------------ |
| 6800101 | if input parameter value error |
**Example**