# Audio Rendering Development

## Introduction

**AudioRenderer** provides APIs for rendering audio files and controlling playback. It also supports audio interruption. You can use the APIs provided by **AudioRenderer** to play audio files on output devices and manage playback tasks.
Before calling the APIs, be familiar with the following terms:

- **Audio interruption**: When an audio stream with a higher priority needs to be played, the audio renderer interrupts the stream with a lower priority. For example, if a call comes in when the user is listening to music, the music playback, which is the lower priority stream, is paused.
- **Status check**: During application development, you are advised to use **on('stateChange')** to subscribe to state changes of the **AudioRenderer** instance. This is because some operations can be performed only when the audio renderer is in a given state. If the application performs an operation when the audio renderer is not in the given state, the system may throw an exception or generate other undefined behavior.
- **Asynchronous operation**: To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides the callback and promise functions. The following examples use the promise functions. For more information, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).
- **Audio interruption mode**: OpenHarmony provides two audio interruption modes: **shared mode** and **independent mode**. In shared mode, all **AudioRenderer** instances created by the same application share one focus object, and there is no focus transfer inside the application; therefore, no callback is triggered. In independent mode, each **AudioRenderer** instance has an independent focus object, and focus transfer is triggered by focus preemption. When focus transfer occurs, the **AudioRenderer** instance that holds the focus receives a notification through the callback. By default, the shared mode is used. You can call **setInterruptMode()** to switch to the independent mode, as shown in the sketch below.
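
The following is a minimal sketch of switching a renderer to the independent mode. It assumes that **audioRenderer** has already been created as described in the development procedure below.

```js
// A minimal sketch, assuming audioRenderer was created as shown in the development procedure.
let mode = audio.InterruptMode.INDEPENDENT_MODE;
audioRenderer.setInterruptMode(mode).then(() => {
  console.info('setInterruptMode success: independent mode');
}).catch((err) => {
  console.error(`setInterruptMode failed: ${err}`);
});
```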

## Working Principles

The following figure shows the audio renderer state transitions.

**Figure 1** Audio renderer state transitions

![audio-renderer-state](figures/audio-renderer-state.png)

- **PREPARED**: The audio renderer enters this state by calling **create()**.

- **RUNNING**: The audio renderer enters this state by calling **start()** when it is in the **PREPARED**, **PAUSED**, or **STOPPED** state.

- **PAUSED**: The audio renderer enters this state by calling **pause()** when it is in the **RUNNING** state. While playback is paused, the application can call **start()** to resume it.

- **STOPPED**: The audio renderer enters this state by calling **stop()** when it is in the **PAUSED** or **RUNNING** state.

- **RELEASED**: The audio renderer enters this state by calling **release()** when it is in the **PREPARED**, **PAUSED**, or **STOPPED** state. In this state, the audio renderer releases all occupied hardware and software resources and will not transition to any other state.
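
The sketch below walks a renderer through these transitions and logs the final state. It assumes **audioRendererOptions** is configured as in the development procedure below.

```js
// A minimal sketch of the state transitions above, assuming audioRendererOptions
// is configured as in the development procedure below.
let renderer = await audio.createAudioRenderer(audioRendererOptions); // PREPARED
await renderer.start();   // PREPARED -> RUNNING
await renderer.pause();   // RUNNING -> PAUSED
await renderer.start();   // PAUSED -> RUNNING
await renderer.stop();    // RUNNING -> STOPPED
await renderer.release(); // STOPPED -> RELEASED; no further transitions are possible.
console.info(`Final state: ${renderer.state}`); // Expected: STATE_RELEASED
```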

## How to Develop

For details about the APIs, see [AudioRenderer in Audio Management](../reference/apis/js-apis-audio.md#audiorenderer8).

1. Use **createAudioRenderer()** to create an **AudioRenderer** instance.

   Set parameters of the **AudioRenderer** instance in **audioRendererOptions**. This instance is used to render audio, control and obtain the rendering status, and register a callback for notification.

   ```js
   import audio from '@ohos.multimedia.audio';

   let audioStreamInfo = {
       samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
       channels: audio.AudioChannel.CHANNEL_1,
       sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
       encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
   }
   let audioRendererInfo = {
       content: audio.ContentType.CONTENT_TYPE_SPEECH,
       usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
       rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0.
   }
   let audioRendererOptions = {
       streamInfo: audioStreamInfo,
       rendererInfo: audioRendererInfo
   }

   let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
   console.log("Create audio renderer success.");
   ```

2. Use **start()** to start audio rendering.

   ```js
   async function startRenderer() {
     let state = audioRenderer.state;
     // The audio renderer should be in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state when start() is called.
     if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
       state != audio.AudioState.STATE_STOPPED) {
       console.info('Renderer is not in a correct state to start');
       return;
     }

     await audioRenderer.start();

     state = audioRenderer.state;
     if (state == audio.AudioState.STATE_RUNNING) {
       console.info('Renderer started');
     } else {
       console.error('Renderer start failed');
     }
   }
   ```
   The renderer state will be **STATE_RUNNING** once the audio renderer is started. The application can then begin writing data to the buffer.

3. Call **write()** to write data to the buffer.

   Read the audio data to be played into the buffer, and call **write()** repeatedly to write the data to the renderer.

   ```js
   import fileio from '@ohos.fileio';
   import audio from '@ohos.multimedia.audio';

   async function writeBuffer(buf) {
     // The write operation can be performed only when the state is STATE_RUNNING.
     if (audioRenderer.state != audio.AudioState.STATE_RUNNING) {
       console.error('Renderer is not running, do not write');
       return;
     }
     let writtenbytes = await audioRenderer.write(buf);
     console.info(`Actual written bytes: ${writtenbytes} `);
     if (writtenbytes < 0) {
       console.error('Write buffer failed. check the state of renderer');
     }
   }

   // Set a proper buffer size for the audio renderer. You can also select a buffer of another size.
   const bufferSize = await audioRenderer.getBufferSize();
   let dir = globalThis.fileDir; // You must use the sandbox path.
   const path = dir + '/file_example_WAV_2MG.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/file_example_WAV_2MG.wav
   console.info(`file path: ${path}`);
   let ss = fileio.createStreamSync(path, 'r');
   const totalSize = fileio.statSync(path).size; // Size of the file to render.
   let discardHeader = new ArrayBuffer(bufferSize);
   ss.readSync(discardHeader);
   let rlen = 0;
   rlen += bufferSize;

   let id = setInterval(async () => { // The callback must be async because it awaits stop().
     if (audioRenderer.state == audio.AudioState.STATE_RELEASED) { // The rendering stops if the audio renderer is in the STATE_RELEASED state.
       ss.closeSync();
       clearInterval(id);
       return;
     }
     if (audioRenderer.state == audio.AudioState.STATE_RUNNING) {
       if (rlen >= totalSize) { // The rendering stops if the file finishes reading.
         ss.closeSync();
         await audioRenderer.stop();
         clearInterval(id);
         return;
       }
       let buf = new ArrayBuffer(bufferSize);
       rlen += ss.readSync(buf);
       console.info(`Total bytes read from file: ${rlen}`);
       writeBuffer(buf);
     } else {
       console.info('check after next interval');
     }
   }, 30); // The timer interval is set based on the audio format. The unit is millisecond.
   ```

4. (Optional) Call **pause()** or **stop()** to pause or stop rendering.

   ```js
   async function pauseRenderer() {
     let state = audioRenderer.state;
     // The audio renderer can be paused only when it is in the STATE_RUNNING state.
     if (state != audio.AudioState.STATE_RUNNING) {
       console.info('Renderer is not running');
       return;
     }

     await audioRenderer.pause();

     state = audioRenderer.state;
     if (state == audio.AudioState.STATE_PAUSED) {
       console.info('Renderer paused');
     } else {
       console.error('Renderer pause failed');
     }
   }

   async function stopRenderer() {
     let state = audioRenderer.state;
     // The audio renderer can be stopped only when it is in the STATE_RUNNING or STATE_PAUSED state.
     if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
       console.info('Renderer is not running or paused');
       return;
     }

     await audioRenderer.stop();

     state = audioRenderer.state;
     if (state == audio.AudioState.STATE_STOPPED) {
       console.info('Renderer stopped');
     } else {
       console.error('Renderer stop failed');
     }
   }
   ```

5. (Optional) Call **drain()** to clear the buffer.

   ```js
   async function drainRenderer() {
     let state = audioRenderer.state;
     // drain() can be used only when the audio renderer is in the STATE_RUNNING state.
     if (state != audio.AudioState.STATE_RUNNING) {
       console.info('Renderer is not running');
       return;
     }

     await audioRenderer.drain();

     state = audioRenderer.state;
   }
   ```

6. After the task is complete, call **release()** to release related resources.

   **AudioRenderer** uses a large number of system resources. Therefore, ensure that the resources are released after the task is complete.

   ```js
   async function releaseRenderer() {
     let state = audioRenderer.state;
     // The audio renderer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
     if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
       console.info('Renderer already released');
       return;
     }

     await audioRenderer.release();

     state = audioRenderer.state;
     if (state == audio.AudioState.STATE_RELEASED) {
       console.info('Renderer released');
     } else {
       console.info('Renderer release failed');
     }
   }
   ```

7. (Optional) Obtain the audio renderer information.
   
   You can use the following code to obtain the audio renderer information:

   ```js
   // Obtain the audio renderer state.
   let state = audioRenderer.state;
   
   // Obtain the audio renderer information.
   let audioRendererInfo : audio.AudioRendererInfo = await audioRenderer.getRendererInfo();

   // Obtain the audio stream information.
   let audioStreamInfo : audio.AudioStreamInfo = await audioRenderer.getStreamInfo();

   // Obtain the audio stream ID.
   let audioStreamId : number = await audioRenderer.getAudioStreamId();

   // Obtain the Unix timestamp, in nanoseconds.
   let audioTime : number = await audioRenderer.getAudioTime();

   // Obtain a proper minimum buffer size.
   let bufferSize : number = await audioRenderer.getBufferSize();

   // Obtain the audio renderer rate.
   let renderRate : audio.AudioRendererRate = await audioRenderer.getRenderRate();
   ```

8. (Optional) Set the audio renderer information.
   
   You can use the following code to set the audio renderer information:

   ```js
   // Set the audio renderer rate to RENDER_RATE_NORMAL.
   let renderRate : audio.AudioRendererRate = audio.AudioRendererRate.RENDER_RATE_NORMAL;
   await audioRenderer.setRenderRate(renderRate);

   // Set the interruption mode of the audio renderer to SHARE_MODE.
   let interruptMode : audio.InterruptMode = audio.InterruptMode.SHARE_MODE;
   await audioRenderer.setInterruptMode(interruptMode);

   // Set the volume of the stream to 0.5.
   let volume : number = 0.5;
   await audioRenderer.setVolume(volume);
   ```

9. (Optional) Use **on('audioInterrupt')** to subscribe to the audio interruption event, and use **off('audioInterrupt')** to unsubscribe from the event.

   Audio interruption means that Stream A will be interrupted when Stream B with a higher or equal priority requests to become active and use the output device.

   In some cases, the audio renderer performs forcible operations such as pausing and ducking, and notifies the application through **InterruptEvent**. In other cases, the application can choose to act on the **InterruptEvent** or ignore it.

   In the case of audio interruption, the application may encounter write failures. To avoid such failures, interruption-unaware applications can use **audioRenderer.state** to check the audio renderer state before writing audio data. The applications can obtain more details by subscribing to the audio interruption events. For details, see [InterruptEvent](../reference/apis/js-apis-audio.md#interruptevent9).

   It should be noted that the audio interruption event subscription of the **AudioRenderer** module is slightly different from **on('interrupt')** in [AudioManager](../reference/apis/js-apis-audio.md#audiomanager). The **on('interrupt')** and **off('interrupt')** APIs are deprecated since API version 9. In the **AudioRenderer** module, you only need to call **on('audioInterrupt')** to listen for focus change events. When the **AudioRenderer** instance created by the application performs actions such as start, stop, and pause, it requests the focus, which triggers focus transfer and in turn enables the related **AudioRenderer** instance to receive a notification through the callback. For instances other than **AudioRenderer**, such as frequency modulation (FM) and voice wakeup, the application does not create an instance. In this case, the application can call **on('interrupt')** in **AudioManager** to receive a focus change notification, as sketched after the example below.
   
   ```js
   audioRenderer.on('audioInterrupt', (interruptEvent) => {
     console.info('InterruptEvent Received');
     console.info(`InterruptType: ${interruptEvent.eventType}`);
     console.info(`InterruptForceType: ${interruptEvent.forceType}`);
     console.info(`InterruptHint: ${interruptEvent.hintType}`);

     if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_FORCE) {
       switch (interruptEvent.hintType) {
         // Forcible pausing initiated by the audio framework. To prevent data loss, stop the write operation.
         case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
           isPlay = false;
           break;
         // Forcible stopping initiated by the audio framework. To prevent data loss, stop the write operation.
         case audio.InterruptHint.INTERRUPT_HINT_STOP:
           isPlay = false;
           break;
         // Forcible ducking initiated by the audio framework.
         case audio.InterruptHint.INTERRUPT_HINT_DUCK:
           break;
         // Unducking initiated by the audio framework.
         case audio.InterruptHint.INTERRUPT_HINT_UNDUCK:
           break;
       }
     } else if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_SHARE) {
       switch (interruptEvent.hintType) {
         // Notify the application that the rendering starts.
         case audio.InterruptHint.INTERRUPT_HINT_RESUME:
           startRenderer();
           break;
         // Notify the application that the audio stream is interrupted. The application then determines whether to continue. (In this example, the application pauses the rendering.)
         case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
           isPlay = false;
           pauseRenderer();
           break;
       }
     }
   });

   audioRenderer.off('audioInterrupt'); // Unsubscribe from the audio interruption event. This event will no longer be listened for.
   ```
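
   For streams that are not created through **AudioRenderer** (for example, FM), the application can use the deprecated **on('interrupt')** in **AudioManager** instead, as noted above. The following is a minimal sketch; the **AudioInterrupt** parameter values are illustrative.

   ```js
   // A minimal sketch of the deprecated AudioManager subscription described above.
   // The interrupt parameter values below are illustrative.
   let audioManager = audio.getAudioManager();
   let interrupt = {
     streamUsage: audio.StreamUsage.STREAM_USAGE_MEDIA,
     contentType: audio.ContentType.CONTENT_TYPE_MUSIC,
     pauseWhenDucked: true
   };
   audioManager.on('interrupt', interrupt, (interruptAction) => {
     console.info(`InterruptAction received: ${JSON.stringify(interruptAction)}`);
   });
   ```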

10. (Optional) Use **on('markReach')** to subscribe to the mark reached event, and use **off('markReach')** to unsubscribe from the event.

    After the mark reached event is subscribed to, when the number of frames rendered by the audio renderer reaches the specified value, a callback is triggered and the specified value is returned.
   
    ```js
    audioRenderer.on('markReach', 1000, (reachNumber) => { // 1000 is an example frame count at which the callback is triggered.
      console.info('Mark reach event Received');
      console.info(`The renderer reached frame: ${reachNumber}`);
    });

    audioRenderer.off('markReach'); // Unsubscribe from the mark reached event. This event will no longer be listened for.
    ```

11. (Optional) Use **on('periodReach')** to subscribe to the period reached event, and use **off('periodReach')** to unsubscribe from the event.

    After the period reached event is subscribed to, each time the number of frames rendered by the audio renderer reaches the specified value, a callback is triggered and the specified value is returned.
   
    ```js
    audioRenderer.on('periodReach', 1000, (reachNumber) => { // 1000 is an example frame period that triggers the callback.
      console.info('Period reach event Received');
      console.info(`In this period, the renderer reached frame: ${reachNumber} `);
    });

    audioRenderer.off('periodReach'); // Unsubscribe from the period reached event. This event will no longer be listened for.
    ```

12. (Optional) Use **on('stateChange')** to subscribe to audio renderer state changes.

    After the **stateChange** event is subscribed to, when the audio renderer state changes, a callback is triggered and the audio renderer state is returned.
   
    ```js
    audioRenderer.on('stateChange', (audioState) => {
      console.info('State change event Received');
      console.info(`Current renderer state is: ${audioState}`);
    });
    ```

13. (Optional) Handle exceptions of **on()**.

    If the string or the parameter type passed in **on()** is incorrect, the application throws an exception. In this case, you can use **try catch** to capture the exception.
   
    ```js
    try {
      audioRenderer.on('invalidInput', () => { // The string is invalid.
      })
    } catch (err) {
      console.info(`Call on function error, ${err}`); // The application throws exception 401.
    }
    try {
      audioRenderer.on(1, () => { // The type of the input parameter is incorrect.
      })
    } catch (err) {
      console.info(`Call on function error, ${err}`); // The application throws exception 6800101.
    }
    ```

14. (Optional) Refer to the complete example of **on('audioInterrupt')**.
    
    Create **AudioRender1** and **AudioRender2** in an application, configure the independent interruption mode, and call **on('audioInterrupt')** to subscribe to audio interruption events. At the beginning, **AudioRender1** has the focus. When **AudioRender2** attempts to obtain the focus, **AudioRender1** receives a focus transfer notification and the related log information is printed. If the shared mode is used, the log information will not be printed during application running.
    
    ```js
    async runningAudioRender1(){
      let audioStreamInfo = {
        samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000,
        channels: audio.AudioChannel.CHANNEL_1,
        sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S32LE,
        encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
      }
      let audioRendererInfo = {
        content: audio.ContentType.CONTENT_TYPE_MUSIC,
        usage: audio.StreamUsage.STREAM_USAGE_MEDIA,
        rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0.
      }
      let audioRendererOptions = {
        streamInfo: audioStreamInfo,
        rendererInfo: audioRendererInfo
      }

      // 1.1 Create an instance.
      audioRenderer1 = await audio.createAudioRenderer(audioRendererOptions);
      console.info("Create audio renderer 1 success.");

      // 1.2 Set the independent mode.
      audioRenderer1.setInterruptMode(1).then( data => {
        console.info('audioRenderer1 setInterruptMode Success!');
      }).catch((err) => {
        console.error(`audioRenderer1 setInterruptMode Fail: ${err}`);
      });

      // 1.3 Set the listener.
      audioRenderer1.on('audioInterrupt', async(interruptEvent) => {
        console.info(`audioRenderer1 on audioInterrupt : ${JSON.stringify(interruptEvent)}`)
      });

      // 1.4 Start rendering.
      await audioRenderer1.start();
      console.info('startAudioRender1 success');

      // 1.5 Obtain the buffer size, which is the proper minimum buffer size of the audio renderer. You can also select a buffer of another size.
      const bufferSize = await audioRenderer1.getBufferSize();
      console.info(`audio bufferSize: ${bufferSize}`);

      // 1.6 Obtain the original audio data file.
      let dir = globalThis.fileDir; // You must use the sandbox path.
      const path1 = dir + '/music001_48000_32_1.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/music001_48000_32_1.wav
      console.info(`audioRender1 file path: ${path1}`);
      let ss1 = await fileio.createStream(path1,'r');
      const totalSize1 = fileio.statSync(path1).size; // Size of the file to render.
      console.info(`totalSize1: ${totalSize1}`);
      let discardHeader = new ArrayBuffer(bufferSize);
      ss1.readSync(discardHeader);
      let rlen = 0;
      rlen += bufferSize;

      // 1.7 Render the original audio data in the buffer by using audioRenderer1.
      let id = setInterval(async () => {
        if (audioRenderer1.state == audio.AudioState.STATE_RELEASED) { // The rendering stops if the audio renderer is in the STATE_RELEASED state.
          ss1.closeSync();
          audioRenderer1.stop();
          clearInterval(id);
        }
        if (audioRenderer1.state == audio.AudioState.STATE_RUNNING) {
          if (rlen >= totalSize1) { // The rendering stops if the file finishes reading.
            ss1.closeSync();
            await audioRenderer1.stop();
            clearInterval(id);
          }
          let buf = new ArrayBuffer(bufferSize);
          rlen += ss1.readSync(buf);
          console.info(`Total bytes read from file: ${rlen}`);
          await writeBuffer(buf, audioRenderer1);
        } else {
          console.info('check after next interval');
        }
      }, 30); // The timer interval is set based on the audio format. The unit is millisecond.
    }
    
    async runningAudioRender2(){
      let audioStreamInfo = {
        samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000,
        channels: audio.AudioChannel.CHANNEL_1,
        sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S32LE,
        encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
      }
      let audioRendererInfo = {
        content: audio.ContentType.CONTENT_TYPE_MUSIC,
        usage: audio.StreamUsage.STREAM_USAGE_MEDIA,
        rendererFlags: 0 // 0 is the extended flag bit of the audio renderer. The default value is 0.
      }
      let audioRendererOptions = {
        streamInfo: audioStreamInfo,
        rendererInfo: audioRendererInfo
      }

      // 2.1 Create another instance.
      audioRenderer2 = await audio.createAudioRenderer(audioRendererOptions);
      console.info("Create audio renderer 2 success.");

      // 2.2 Set the independent mode.
      audioRenderer2.setInterruptMode(1).then( data => {
        console.info('audioRenderer2 setInterruptMode Success!');
      }).catch((err) => {
        console.error(`audioRenderer2 setInterruptMode Fail: ${err}`);
      });

      // 2.3 Set the listener.
      audioRenderer2.on('audioInterrupt', async(interruptEvent) => {
        console.info(`audioRenderer2 on audioInterrupt : ${JSON.stringify(interruptEvent)}`)
      });

      // 2.4 Start rendering.
      await audioRenderer2.start();
      console.info('startAudioRender2 success');

      // 2.5 Obtain the buffer size.
      const bufferSize = await audioRenderer2.getBufferSize();
      console.info(`audio bufferSize: ${bufferSize}`);

      // 2.6 Read the original audio data file.
      let dir = globalThis.fileDir; // You must use the sandbox path.
      const path2 = dir + '/music002_48000_32_1.wav'; // The file to render is in the following path: /data/storage/el2/base/haps/entry/files/music002_48000_32_1.wav
      console.info(`audioRender2 file path: ${path2}`);
      let ss2 = await fileio.createStream(path2,'r');
      const totalSize2 = fileio.statSync(path2).size; // Size of the file to render.
      console.info(`totalSize2: ${totalSize2}`);
      let discardHeader2 = new ArrayBuffer(bufferSize);
      ss2.readSync(discardHeader2);
      let rlen = 0;
      rlen += bufferSize;

      // 2.7 Render the original audio data in the buffer by using audioRenderer2.
      let id = setInterval(async () => {
        if (audioRenderer2.state == audio.AudioState.STATE_RELEASED) { // The rendering stops if the audio renderer is in the STATE_RELEASED state.
          ss2.closeSync();
          audioRenderer2.stop();
          clearInterval(id);
        }
        if (audioRenderer2.state == audio.AudioState.STATE_RUNNING) {
          if (rlen >= totalSize2) { // The rendering stops if the file finishes reading.
            ss2.closeSync();
            await audioRenderer2.stop();
            clearInterval(id);
          }
          let buf = new ArrayBuffer(bufferSize);
          rlen += ss2.readSync(buf);
          console.info(`Total bytes read from file: ${rlen}`);
          await writeBuffer(buf, audioRenderer2);
        } else {
          console.info('check after next interval');
        }
      }, 30); // The timer interval is set based on the audio format. The unit is millisecond.
    }
    
    async writeBuffer(buf, audioRender) {
      let writtenbytes;
      await audioRender.write(buf).then((value) => {
        writtenbytes = value;
        console.info(`Actual written bytes: ${writtenbytes} `);
      });
      if (typeof(writtenbytes) != 'number' || writtenbytes < 0) {
        console.error('Write buffer failed. check the state of renderer');
      }
    }
    
    // Integrated invoking entry.
    async test(){
      await runningAudioRender1();
      await runningAudioRender2();
    }
    
    ```