# Audio Rendering Development

---
## ***Note***:
    1. This document applies to JavaScript.
---
## **Summary**
This guide will show you how to use AudioRenderer to create an audio player app.
You can use the APIs provided in this document to play audio files through output devices and manage playback tasks.

## **AudioRenderer Framework**
The AudioRenderer interface is one of the most important components of the audio framework.
### **Audio Rendering:**
The AudioRenderer framework provides APIs for playing audio files and controlling the playback.
### **Audio Interruption:**
When a higher priority stream wants to play, the AudioRenderer framework interrupts the lower priority stream.\
For example, if a call arrives while you are listening to music, the music playback, which is the lower priority stream, is paused.\
With the sample code below, we'll look at how AudioInterrupt works in detail.\
<br/>
Please see [AudioRenderer in the Audio API](../reference/apis/js-apis-audio.md#audiorenderer8) for a list of supported audio stream types and formats, such as AudioSamplingRate, AudioChannel, AudioSampleFormat, and AudioEncodingType.


## **Usage**
Here's an example of how to use AudioRenderer to play a raw audio file.
1. Use **createAudioRenderer** to create an AudioRenderer instance. Renderer parameters can be set in **audioRendererOptions**.\
   This object can be used to play, control, and obtain the status of the playback, as well as receive callback notifications.
   ```
    var audioStreamInfo = {
        samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
        channels: audio.AudioChannel.CHANNEL_1,
        sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
        encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
    }

    var audioRendererInfo = {
        content: audio.ContentType.CONTENT_TYPE_SPEECH,
        usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
        rendererFlags: 1
    }

    var audioRendererOptions = {
        streamInfo: audioStreamInfo,
        rendererInfo: audioRendererInfo
    }

    let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
   ```

2. Subscribe to audio interruption events using the **on** API.\
   Stream-A is interrupted when Stream-B with a higher or equal priority requests to become active and use the output device.\
   In some cases, the framework takes forced actions such as pausing or ducking the stream and notifies the app through an **InterruptEvent**. In other cases, the action is only suggested, and the app can choose to act on the **InterruptEvent** or ignore it. When the app is interrupted by a forced action, it should handle the state change, update the user interface, and so on.

   During audio interruptions, the app may encounter write failures. Interrupt-unaware apps can check the renderer state using the **audioRenderer.state** API before writing audio data, whereas interrupt-aware apps will have more details accessible via this listener.\
   <br/>
   The following information will be provided by the Interrupt Event Notification:

    1) **eventType:** Whether the interruption has begun or ended.

       | Value                | Description                                   |
       | :------------------- | :-------------------------------------------- |
       | INTERRUPT_TYPE_BEGIN | Indicates that the interruption has started.  |
       | INTERRUPT_TYPE_END   | Indicates that the interruption has finished. |

    2) **forceType:** Whether the framework has already taken action or if the app is being suggested to take action.

       | Value           | Description                                                         |
       | :-------------- | :------------------------------------------------------------------ |
       | INTERRUPT_FORCE | The audio state has been changed by the framework.                  |
       | INTERRUPT_SHARE | The app can decide whether or not to respond to the InterruptEvent. |

    3) **hintType:** The kind of action taken or to be taken.

       | Value                 | Description                  |
       | :-------------------- | :--------------------------- |
       | INTERRUPT_HINT_PAUSE  | Pausing the playback.        |
       | INTERRUPT_HINT_RESUME | Resuming the playback.       |
       | INTERRUPT_HINT_STOP   | Stopping the playback.       |
       | INTERRUPT_HINT_DUCK   | Ducking the stream volume.   |
       | INTERRUPT_HINT_UNDUCK | Unducking the stream volume. |

    4) **Some actions are exclusively forced or shared**, which means that they are performed by either the framework or the app.\
       For instance, when a call is received while a music stream is ongoing, the framework forces the music stream to pause. When the call is finished, the framework will not forcibly resume the music stream. Instead, it will alert the app to resume the playback.

       | Action                | Description                                                                       |
       | :-------------------- | :-------------------------------------------------------------------------------- |
       | INTERRUPT_HINT_RESUME | INTERRUPT_SHARE is always the forceType. It can only be done by the app.     |
       | INTERRUPT_HINT_DUCK   | INTERRUPT_FORCE is always the forceType. It will always be done by the framework. |
       | INTERRUPT_HINT_UNDUCK | INTERRUPT_FORCE is always the forceType. It will always be done by the framework. |

   ```
    audioRenderer.on('interrupt', (interruptEvent) => {
        console.info('InterruptEvent Received');
        console.info('InterruptType: ' + interruptEvent.eventType);
        console.info('InterruptForceType: ' + interruptEvent.forceType);
        console.info('InterruptHint: ' + interruptEvent.hintType);

        if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_FORCE) {
            switch (interruptEvent.hintType) {
                // Force Pause: Action was taken by framework.
                // Halt the write calls to avoid data loss.
                case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
                    isPlay = false;
                    break;
                // Force Stop: Action was taken by framework.
                // Halt the write calls to avoid data loss.
                case audio.InterruptHint.INTERRUPT_HINT_STOP:
                    isPlay = false;
                    break;
                // Force Duck: Action was taken by framework,
                // just notifying the app that volume has been reduced.
                case audio.InterruptHint.INTERRUPT_HINT_DUCK:
                    break;
                // Force Unduck: Action was taken by framework,
                // just notifying the app that volume has been restored.
                case audio.InterruptHint.INTERRUPT_HINT_UNDUCK:
                    break;
            }
        } else if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_SHARE) {
            switch (interruptEvent.hintType) {
                // Share Resume: Action is to be taken by App.
                // Resume the force paused stream if required.
                case audio.InterruptHint.INTERRUPT_HINT_RESUME:
                    startRenderer();
                    break;
                // Share Pause: Stream has been interrupted.
                // The app can choose to pause or continue playing.
                case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
                    isPlay = false;
                    pauseRenderer();
                    break;
            }
        }
    });
   ```

3. Call the **start()** function on the AudioRenderer instance to start/resume the playback task.\
   The renderer state will be STATE_RUNNING once the start is complete. You can then begin writing buffers.
   ```
    async function startRenderer() {
        var state = audioRenderer.state;
        // state should be prepared, paused or stopped.
        if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
            state != audio.AudioState.STATE_STOPPED) {
            console.info('Renderer is not in a correct state to start');
            return;
        }

        await audioRenderer.start();

        state = audioRenderer.state;
        if (state == audio.AudioState.STATE_RUNNING) {
            console.info('Renderer started');
        } else {
            console.error('Renderer start failed');
        }
    }

   ```
4. Make **write** calls to start rendering the buffers.\
   Read the audio data to be played into a buffer. Call the write function repeatedly to write data.
   ```
    async function writeBuffer(buf) {
        var state = audioRenderer.state;
        if (state != audio.AudioState.STATE_RUNNING) {
            console.error('Renderer is not running, do not write');
            isPlay = false;
            return;
        }
        let writtenbytes = await audioRenderer.write(buf);

        console.info('Actual written bytes: ' + writtenbytes);
        if (writtenbytes < 0) {
            console.error('Write buffer failed. check the state of renderer');
        }
    }

    // Reasonable minimum buffer size for the renderer. However, the renderer can accept buffers of other sizes as well.
    const bufferSize = await audioRenderer.getBufferSize();
    const path = '/data/file_example_WAV_2MG.wav';
    let ss = fileio.createStreamSync(path, 'r');
    const totalSize = 2146166; // file_example_WAV_2MG.wav
    let rlen = 0;
    let discardHeader = new ArrayBuffer(44); // skip the 44-byte WAV header; only raw PCM data is written to the renderer
    ss.readSync(discardHeader);
    rlen += 44;

    var id = setInterval(() =>  {
        if (isPlay || isRelease) {
            if (rlen >= totalSize || isRelease) {
                ss.closeSync();
                stopRenderer();
                clearInterval(id);
                return;
            }
            let buf = new ArrayBuffer(bufferSize);
            rlen += ss.readSync(buf);
            console.info('Total bytes read from file: ' + rlen);
            writeBuffer(buf);
        } else {
            console.info('check after next interval');
        }
    } , 30); // interval to be set based on audio file format
   ```

5. (Optional) Call the **pause()** or **stop()** function on the AudioRenderer instance.
```
    async function pauseRenderer() {
        var state = audioRenderer.state;
        if (state != audio.AudioState.STATE_RUNNING) {
            console.info('Renderer is not running');
            return;
        }

        await audioRenderer.pause();

        state = audioRenderer.state;
        if (state == audio.AudioState.STATE_PAUSED) {
            console.info('Renderer paused');
        } else {
            console.error('Renderer pause failed');
        }
    }

    async function stopRenderer() {
        var state = audioRenderer.state;
        if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
            console.info('Renderer is not running or paused');
            return;
        }

        await audioRenderer.stop();

        state = audioRenderer.state;
        if (state == audio.AudioState.STATE_STOPPED) {
            console.info('Renderer stopped');
        } else {
            console.error('Renderer stop failed');
        }
    }
```

6. After the playback task is complete, call the **release()** function on the AudioRenderer instance to release resources.\
   AudioRenderer can use a significant amount of system resources. Whenever these resources are no longer required, they must be released. Always call **release()** to ensure that any system resources allocated to the renderer are freed appropriately.
```
    async function releaseRenderer() {
        var state = audioRenderer.state;
        if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
            console.info('Resources already released');
            return;
        }

        await audioRenderer.release();

        state = audioRenderer.state;
        if (state == audio.AudioState.STATE_RELEASED) {
            console.info('Renderer released');
        } else {
            console.error('Renderer release failed');
        }

    }
```

## **Importance of State Check:**
Keep in mind that an AudioRenderer is state-based.
That is, the AudioRenderer has an internal state that you must always check when calling playback control APIs, because some operations are only acceptable while the renderer is in a given state.\
The system may throw an error or exception, or produce other undefined behaviour, if you perform an operation in the wrong state.
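
As a minimal sketch of such a check, the snippet below only calls **start()** when the renderer is in one of the states that allow it. The `isInState` helper is hypothetical (not part of the Audio API) and assumes the `audioRenderer` instance created in step 1.
```
    // Hypothetical helper (not part of the Audio API): returns true only if the
    // renderer is currently in one of the states that allow the intended operation.
    function isInState(allowedStates) {
        return allowedStates.indexOf(audioRenderer.state) != -1;
    }

    // start() is only valid from the prepared, paused, or stopped state.
    if (isInState([audio.AudioState.STATE_PREPARED,
                   audio.AudioState.STATE_PAUSED,
                   audio.AudioState.STATE_STOPPED])) {
        await audioRenderer.start();
    } else {
        console.error('start() is not allowed in state: ' + audioRenderer.state);
    }
```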

## **Asynchronous Operations:**
Most of the AudioRenderer calls are asynchronous. As a result, the UI thread will not be blocked.\
For each API, the framework provides both callback and promise versions.\
In the example, promise functions are used for simplicity. [AudioRenderer in the Audio API](../reference/apis/js-apis-audio.md#audiorenderer8) provides references for both the callback and promise versions.
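
For instance, the **start()** API used in this guide can be invoked in either style. The sketch below assumes the `audioRenderer` instance created in step 1.
```
    // Promise version (used throughout this guide).
    await audioRenderer.start();

    // Callback version: the result is delivered through an AsyncCallback.
    audioRenderer.start((err) => {
        if (err) {
            console.error('Renderer start failed');
        } else {
            console.info('Renderer started');
        }
    });
```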

## **Other APIs:**
See [Audio](../reference/apis/js-apis-audio.md) for more useful APIs like getAudioTime, drain, and getBufferSize.
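
As a brief sketch, assuming the `audioRenderer` instance created earlier, these APIs can be used as follows:
```
    // Timestamp reported by the renderer for the playing stream.
    let audioTime = await audioRenderer.getAudioTime();
    console.info('Current audio time: ' + audioTime);

    // Minimum buffer size suggested by the renderer (also used in step 4 above).
    let bufferSize = await audioRenderer.getBufferSize();
    console.info('Renderer buffer size: ' + bufferSize);

    // Drain the playback buffer after all the data has been written.
    await audioRenderer.drain();
```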