Commit 2d1006a6 (OpenHarmony / Docs), unverified signature
Authored on Nov 07, 2022 by openharmony_ci; committed via Gitee on Nov 07, 2022.

!11222 Development guide additions and optimizations
Merge pull request !11222 from 一杯丞丞汁儿/master

Parents: 22fece3f, b004e650
Showing 7 changed files with 684 additions and 227 deletions (+684 -227):

- zh-cn/application-dev/media/audio-capturer.md (+157 -71)
- zh-cn/application-dev/media/audio-renderer.md (+286 -154)
- zh-cn/application-dev/media/audio-routing-manager.md (+112 -0)
- zh-cn/application-dev/media/audio-volume-manager.md (+127 -0)
- zh-cn/application-dev/media/figures/zh-ch_image_audio_routing_manager.png (+0 -0)
- zh-cn/application-dev/media/figures/zh-ch_image_audio_volume_manager.png (+0 -0)
- zh-cn/application-dev/reference/apis/js-apis-audio.md (+2 -2)
zh-cn/application-dev/media/audio-capturer.md @ 2d1006a6

@@ -32,86 +32,70 @@ AudioCapturer provides methods for obtaining raw audio data. Developers can

Set the capturer parameters in audioCapturerOptions. The instance can be used to capture audio, control and obtain the capture state, and register notification callbacks.

```js
import audio from '@ohos.multimedia.audio';

let audioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
  channels: audio.AudioChannel.CHANNEL_1,
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}

let audioCapturerInfo = {
  source: audio.SourceType.SOURCE_TYPE_MIC,
  capturerFlags: 0 // 0 is the extended flag of the audio capturer; the default value is 0
}

let audioCapturerOptions = {
  streamInfo: audioStreamInfo,
  capturerInfo: audioCapturerInfo
}

let audioCapturer = await audio.createAudioCapturer(audioCapturerOptions);
console.log('AudioRecLog: Create audio capturer success.');
```
2. Call start() to start or resume the capture task.

Once the start is complete, the capturer state changes to STATE_RUNNING, and the application can then start reading the buffer.

```js
import audio from '@ohos.multimedia.audio';

async function startCapturer() {
  let state = audioCapturer.state;
  // The capturer can be started only from the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
  if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
      state != audio.AudioState.STATE_STOPPED) {
    console.info('Capturer is not in a correct state to start');
    return;
  }
  await audioCapturer.start();

  state = audioCapturer.state;
  if (state == audio.AudioState.STATE_RUNNING) {
    console.info('AudioRecLog: Capturer started');
  } else {
    console.error('AudioRecLog: Capturer start failed');
  }
}
```
3. Read the captured audio data and convert it to a byte stream. Call read() repeatedly to read the data until the application is ready to stop capturing.

The following example writes the captured data to a file.

```js
import fileio from '@ohos.fileio';

let state = audioCapturer.state;
// read() can be called only when the capturer state is STATE_RUNNING.
if (state != audio.AudioState.STATE_RUNNING) {
  console.info('Capturer is not in a correct state to read');
  return;
}

const path = '/data/data/.pulse_dir/capture_js.wav'; // Path for storing the captured audio file
let fd = fileio.openSync(path, 0o102, 0o777);
if (fd !== null) {
  console.info('AudioRecLog: file fd created');
}
// ... (unchanged lines omitted in the diff)
  console.info('AudioRecLog: file fd opened in append mode');
}

// Obtain the minimum buffer size to read, using getBufferSize().
let bufferSize = await audioCapturer.getBufferSize();
console.info('AudioRecLog: buffer size: ' + bufferSize);

let numBuffersToCapture = 150; // Write the data 150 times in a loop
while (numBuffersToCapture) {
  let buffer = await audioCapturer.read(bufferSize, true);
  if (typeof(buffer) === 'undefined') {
    console.info('AudioRecLog: read buffer failed');
  } else {
    let number = fileio.writeSync(fd, buffer);
    console.info(`AudioRecLog: data written: ${number}`);
  }
  numBuffersToCapture--;
}
```
4. When the capture is complete, call stop() to stop the recording.

```js
async function StopCapturer() {
  let state = audioCapturer.state;
  // The capturer can be stopped only when the state is STATE_RUNNING or STATE_PAUSED.
  if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
    console.info('AudioRecLog: Capturer is not running or paused');
    return;
  }

  await audioCapturer.stop();

  state = audioCapturer.state;
  if (state == audio.AudioState.STATE_STOPPED) {
    console.info('AudioRecLog: Capturer stopped');
  } else {
    console.error('AudioRecLog: Capturer stop failed');
  }
}
```
5. When the task is finished, call release() to release related resources.

```js
async function releaseCapturer() {
  let state = audioCapturer.state;
  // The capturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
  if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
    console.info('AudioRecLog: Capturer already released');
    return;
  }

  await audioCapturer.release();

  state = audioCapturer.state;
  if (state == audio.AudioState.STATE_RELEASED) {
    console.info('AudioRecLog: Capturer released');
  } else {
    console.info('AudioRecLog: Capturer release failed');
  }
}
```
6. (Optional) Obtain capturer information.

Use the following code to obtain information about the capturer.

```js
// Obtain the current capturer state.
let state = audioCapturer.state;

// Obtain the capturer information.
let audioCapturerInfo : audio.AudioCapturerInfo = await audioCapturer.getCapturerInfo();

// Obtain the audio stream information.
let audioStreamInfo : audio.AudioStreamInfo = await audioCapturer.getStreamInfo();

// Obtain the audio stream ID.
let audioStreamId : number = await audioCapturer.getAudioStreamId();

// Obtain the Unix timestamp, in nanoseconds.
let audioTime : number = await audioCapturer.getAudioTime();

// Obtain a reasonable minimum buffer size.
let bufferSize : number = await audioCapturer.getBufferSize();
```
7. (Optional) Use on('markReach') to subscribe to the capturer mark reach event, and use off('markReach') to unsubscribe.

After the markReach listener is registered, the callback is triggered and the set value is returned when the number of captured frames reaches that value.

```js
// The frame threshold (1000 here) is the value at which the callback fires.
audioCapturer.on('markReach', 1000, (reachNumber) => {
  console.info('Mark reach event Received');
  console.info(`The Capturer reached frame: ${reachNumber}`);
});

audioCapturer.off('markReach'); // Unsubscribe from the markReach event; mark-reach events will no longer be received.
```
8. (Optional) Use on('periodReach') to subscribe to the capturer period reach event, and use off('periodReach') to unsubscribe.

After the periodReach listener is registered, the callback is triggered and the set value is returned **each time** the number of captured frames reaches that value.

```js
// The frame threshold (1000 here) is the period at which the callback fires.
audioCapturer.on('periodReach', 1000, (reachNumber) => {
  console.info('Period reach event Received');
  console.info(`In this period, the Capturer reached frame: ${reachNumber}`);
});

audioCapturer.off('periodReach'); // Unsubscribe from the periodReach event; period-reach events will no longer be received.
```
9. (Optional) Use on('stateChange') to subscribe to capturer state change events. If the application needs to perform operations when the capturer state is updated, it can subscribe to this event; whenever the state changes, a callback containing the event type is received.
```js
audioCapturer.on('stateChange', (state) => {
console.info(`AudioCapturerLog: Changed State to : ${state}`)
switch (state) {
case audio.AudioState.STATE_PREPARED:
console.info('--------CHANGE IN AUDIO STATE----------PREPARED--------------');
console.info('Audio State is : Prepared');
break;
case audio.AudioState.STATE_RUNNING:
console.info('--------CHANGE IN AUDIO STATE----------RUNNING--------------');
console.info('Audio State is : Running');
break;
case audio.AudioState.STATE_STOPPED:
console.info('--------CHANGE IN AUDIO STATE----------STOPPED--------------');
console.info('Audio State is : stopped');
break;
case audio.AudioState.STATE_RELEASED:
console.info('--------CHANGE IN AUDIO STATE----------RELEASED--------------');
console.info('Audio State is : released');
break;
default:
console.info('--------CHANGE IN AUDIO STATE----------INVALID--------------');
console.info('Audio State is : invalid');
break;
}
});
```
zh-cn/application-dev/media/audio-renderer.md @ 2d1006a6

@@ -25,32 +25,244 @@ AudioRenderer provides APIs for rendering audio files and controlling playback. Developers can

## Development Guide

For detailed API descriptions, see [AudioRenderer in the audio management API reference](../reference/apis/js-apis-audio.md#audiorenderer8).

1. Use createAudioRenderer() to create an AudioRenderer instance.

Set the related parameters in audioRendererOptions. The instance can be used to render audio, control and obtain the rendering state, and register notification callbacks.

```js
import audio from '@ohos.multimedia.audio';

let audioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
  channels: audio.AudioChannel.CHANNEL_1,
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
}

let audioRendererInfo = {
  content: audio.ContentType.CONTENT_TYPE_SPEECH,
  usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
  rendererFlags: 0 // 0 is the extended flag of the audio renderer; the default value is 0
}

let audioRendererOptions = {
  streamInfo: audioStreamInfo,
  rendererInfo: audioRendererInfo
}

let audioRenderer = await audio.createAudioRenderer(audioRendererOptions);
console.log("Create audio renderer success.");
```
2. Call start() to start or resume the playback task.

```js
async function startRenderer() {
  let state = audioRenderer.state;
  // The renderer can be started only from the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
  if (state != audio.AudioState.STATE_PREPARED && state != audio.AudioState.STATE_PAUSED &&
      state != audio.AudioState.STATE_STOPPED) {
    console.info('Renderer is not in a correct state to start');
    return;
  }

  await audioRenderer.start();

  state = audioRenderer.state;
  if (state == audio.AudioState.STATE_RUNNING) {
    console.info('Renderer started');
  } else {
    console.error('Renderer start failed');
  }
}
```

Once the start is complete, the renderer state changes to STATE_RUNNING, and the application can then start reading the buffer.
3. Call write() to write data to the buffer.

Read the audio data to be played into the buffer and call write() repeatedly to write it.

```js
import fileio from '@ohos.fileio';

async function writeBuffer(buf) {
  let state = audioRenderer.state;
  // The renderer state must be STATE_RUNNING when data is written.
  if (state != audio.AudioState.STATE_RUNNING) {
    console.error('Renderer is not running, do not write');
    this.isPlay = false;
    return;
  }
  let writtenbytes = await audioRenderer.write(buf);
  console.info(`Actual written bytes: ${writtenbytes}`);
  if (writtenbytes < 0) {
    console.error('Write buffer failed. check the state of renderer');
  }
}

// Reasonable minimum buffer size for the renderer (buffers of other sizes are also accepted).
const bufferSize = await audioRenderer.getBufferSize();
const path = '/data/file_example_WAV_2MG.wav'; // Music file to be rendered
let ss = fileio.createStreamSync(path, 'r');
const totalSize = fileio.statSync(path).size; // Size of the music file
let discardHeader = new ArrayBuffer(bufferSize);
ss.readSync(discardHeader);
let rlen = 0;
rlen += bufferSize;

let id = setInterval(() => {
  if (this.isRelease) { // If the renderer state is RELEASED, stop rendering.
    ss.closeSync();
    stopRenderer();
    clearInterval(id);
  }
  if (this.isPlay) {
    if (rlen >= totalSize) { // If the audio file has been read to the end, stop rendering.
      ss.closeSync();
      stopRenderer();
      clearInterval(id);
    }
    let buf = new ArrayBuffer(bufferSize);
    rlen += ss.readSync(buf);
    console.info(`Total bytes read from file: ${rlen}`);
    writeBuffer(buf);
  } else {
    console.info('check after next interval');
  }
}, 30); // The timer interval is set based on the audio format, in milliseconds.
```
4. (Optional) Call pause() or stop() to pause or stop rendering audio data.

```js
async function pauseRenderer() {
  let state = audioRenderer.state;
  // The renderer can be paused only when the state is STATE_RUNNING.
  if (state != audio.AudioState.STATE_RUNNING) {
    console.info('Renderer is not running');
    return;
  }

  await audioRenderer.pause();

  state = audioRenderer.state;
  if (state == audio.AudioState.STATE_PAUSED) {
    console.info('Renderer paused');
  } else {
    console.error('Renderer pause failed');
  }
}

async function stopRenderer() {
  let state = audioRenderer.state;
  // The renderer can be stopped only when the state is STATE_RUNNING or STATE_PAUSED.
  if (state != audio.AudioState.STATE_RUNNING && state != audio.AudioState.STATE_PAUSED) {
    console.info('Renderer is not running or paused');
    return;
  }

  await audioRenderer.stop();

  state = audioRenderer.state;
  if (state == audio.AudioState.STATE_STOPPED) {
    console.info('Renderer stopped');
  } else {
    console.error('Renderer stop failed');
  }
}
```
5. (Optional) Call drain() to drain the buffer.

```js
async function drainRenderer() {
  let state = audioRenderer.state;
  // drain() can be used only when the renderer state is STATE_RUNNING.
  if (state != audio.AudioState.STATE_RUNNING) {
    console.info('Renderer is not running');
    return;
  }

  await audioRenderer.drain();

  state = audioRenderer.state;
}
```
6. When the task is complete, call release() to release related resources.

AudioRenderer uses a large amount of system resources, so be sure to release them once the task is done.

```js
async function releaseRenderer() {
  let state = audioRenderer.state;
  // The renderer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
  if (state == audio.AudioState.STATE_RELEASED || state == audio.AudioState.STATE_NEW) {
    console.info('Renderer already released');
    return;
  }

  await audioRenderer.release();

  state = audioRenderer.state;
  if (state == audio.AudioState.STATE_RELEASED) {
    console.info('Renderer released');
  } else {
    console.info('Renderer release failed');
  }
}
```
7. (Optional) Obtain renderer information.

Use the following code to obtain information about the renderer.

```js
// Obtain the current renderer state.
let state = audioRenderer.state;

// Obtain the renderer information.
let audioRendererInfo : audio.AudioRendererInfo = await audioRenderer.getRendererInfo();

// Obtain the audio stream information.
let audioStreamInfo : audio.AudioStreamInfo = await audioRenderer.getStreamInfo();

// Obtain the audio stream ID.
let audioStreamId : number = await audioRenderer.getAudioStreamId();

// Obtain the Unix timestamp, in nanoseconds.
let audioTime : number = await audioRenderer.getAudioTime();

// Obtain a reasonable minimum buffer size.
let bufferSize : number = await audioRenderer.getBufferSize();

// Obtain the render rate.
let renderRate : audio.AudioRendererRate = await audioRenderer.getRenderRate();
```
8. (Optional) Set renderer information.

Use the following code to set information on the renderer.

```js
// Set the render rate to normal speed.
let renderRate : audio.AudioRendererRate = audio.AudioRendererRate.RENDER_RATE_NORMAL;
await audioRenderer.setRenderRate(renderRate);

// Set the audio interrupt mode of the renderer to SHARE_MODE.
let interruptMode : audio.InterruptMode = audio.InterruptMode.SHARE_MODE;
await audioRenderer.setInterruptMode(interruptMode);

// Set the volume of the stream to 10.
let volume : number = 10;
await audioRenderer.setVolume(volume);
```
9. (Optional) Use on('audioInterrupt') to subscribe to the renderer audio interruption event, and use off('audioInterrupt') to unsubscribe.

Stream-A is interrupted when Stream-B with a higher or equal priority requests to become active and use the output device.

@@ -59,42 +271,36 @@

When audio is interrupted, the application may fail to write audio data. An application that does not want to be aware of or handle interruptions is therefore advised to check the renderer state with audioRenderer.state before writing audio data. Subscribing to the audio interruption event provides more detailed information; for details, see [InterruptEvent](../reference/apis/js-apis-audio.md#interruptevent9).

```js
audioRenderer.on('audioInterrupt', (interruptEvent) => {
  console.info('InterruptEvent Received');
  console.info(`InterruptType: ${interruptEvent.eventType}`);
  console.info(`InterruptForceType: ${interruptEvent.forceType}`);
  console.info(`AInterruptHint: ${interruptEvent.hintType}`);

  if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_FORCE) {
    switch (interruptEvent.hintType) {
      // Forced pause initiated by the audio framework. Halt the write calls to avoid data loss.
      case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
        isPlay = false;
        break;
      // Forced stop initiated by the audio framework. Halt the write calls to avoid data loss.
      case audio.InterruptHint.INTERRUPT_HINT_STOP:
        isPlay = false;
        break;
      // Forced volume ducking initiated by the audio framework; the app is only notified that the volume has been reduced.
      case audio.InterruptHint.INTERRUPT_HINT_DUCK:
        break;
      // Unducking initiated by the audio framework; the app is only notified that the volume has been restored.
      case audio.InterruptHint.INTERRUPT_HINT_UNDUCK:
        break;
    }
  } else if (interruptEvent.forceType == audio.InterruptForceType.INTERRUPT_SHARE) {
    switch (interruptEvent.hintType) {
      // The app is reminded to start rendering; resume the force-paused stream if required.
      case audio.InterruptHint.INTERRUPT_HINT_RESUME:
        startRenderer();
        break;
      // The audio stream has been interrupted; the app decides whether to continue (here it pauses).
      case audio.InterruptHint.INTERRUPT_HINT_PAUSE:
        isPlay = false;
        pauseRenderer();
        // ... (unchanged lines omitted in the diff)
    }
  }
});

audioRenderer.off('audioInterrupt'); // Unsubscribe from the audio interruption event; it will no longer be received.
```
10. (Optional) Use on('markReach') to subscribe to the renderer mark reach event, and use off('markReach') to unsubscribe.

After the markReach listener is registered, the callback is triggered and the set value is returned when the number of rendered frames reaches that value.

```js
// The frame threshold (1000 here) is the value at which the callback fires.
audioRenderer.on('markReach', 1000, (reachNumber) => {
  console.info('Mark reach event Received');
  console.info(`The renderer reached frame: ${reachNumber}`);
});

audioRenderer.off('markReach'); // Unsubscribe from the markReach event; mark-reach events will no longer be received.
```
11. (Optional) Use on('periodReach') to subscribe to the renderer period reach event, and use off('periodReach') to unsubscribe.

After the periodReach listener is registered, the callback is triggered and the set value is returned **each time** the number of rendered frames reaches that value.

```js
// The frame threshold (1000 here) is the period at which the callback fires.
audioRenderer.on('periodReach', 1000, (reachNumber) => {
  console.info('Period reach event Received');
  console.info(`In this period, the renderer reached frame: ${reachNumber}`);
});

audioRenderer.off('periodReach'); // Unsubscribe from the periodReach event; period-reach events will no longer be received.
```
12. (Optional) Use on('stateChange') to subscribe to renderer state change events.

After the stateChange listener is registered, the callback is triggered and the current renderer state is returned whenever the renderer state changes.

```js
audioRenderer.on('stateChange', (audioState) => {
  console.info('State change event Received');
  console.info(`Current renderer state is: ${audioState}`);
});
```
13. (Optional) Handle exceptions thrown by on().

If the string passed to on() is incorrect, or the parameter type is wrong, an exception is thrown and must be caught with try...catch.

```js
try {
  audioRenderer.on('invalidInput', () => {
    // The string does not match any supported event.
  })
} catch (err) {
  console.info(`Call on function error, ${err}`); // The program throws a 401 exception.
}

try {
  audioRenderer.on(1, () => {
    // The parameter type is incorrect.
  })
} catch (err) {
  console.info(`Call on function error, ${err}`); // The program throws a 6800101 exception.
}
```
zh-cn/application-dev/media/audio-routing-manager.md (new file, 0 → 100644) @ 2d1006a6

# Audio Routing and Device Management Development Guide

## Introduction

AudioRoutingManager provides methods for audio routing and device management. This guide describes how an application can use AudioRoutingManager to obtain the input and output audio devices currently in use, listen for changes in the connection state of audio devices, and activate communication devices.

## How It Works

This module provides the common APIs of the routing and device management module.

**Figure 1** Common audio routing and device management APIs

![](figures/zh-ch_image_audio_routing_manager.png)

**NOTE:** The main AudioRoutingManager APIs cover obtaining device list information, subscribing to and unsubscribing from device connection state changes, activating a communication device, and querying its activation status. For more information, see the [API reference](../reference/apis/js-apis-audio.md).

## Development Guide

For detailed API descriptions, see [AudioRoutingManager in the audio routing and device management API reference](../reference/apis/js-apis-audio.md#audioroutingmanager9).

1. Create an AudioRoutingManager instance.

Before using the AudioRoutingManager APIs, use getRoutingManager to create an AudioRoutingManager instance.

```js
import audio from '@ohos.multimedia.audio';

async loadAudioRoutingManager() {
  var audioRoutingManager = await audio.getAudioManager().getRoutingManager();
  console.info('audioRoutingManager------create-------success.');
}
```
2. (Optional) Obtain device list information and listen for device connection state changes.

To obtain device list information (input, output, distributed input, distributed output, and so on), or to listen for changes in the connection state of audio devices, refer to and call the following APIs.

```js
import audio from '@ohos.multimedia.audio';

// Create an AudioRoutingManager instance.
async loadAudioRoutingManager() {
  var audioRoutingManager = await audio.getAudioManager().getRoutingManager();
  console.info('audioRoutingManager------create-------success.');
}

// Obtain information about all audio devices (pass the DeviceFlag that suits your needs).
async getDevices() {
  await loadAudioRoutingManager();
  await audioRoutingManager.getDevices(audio.DeviceFlag.ALL_DEVICES_FLAG).then((data) => {
    console.info(`getDevices success and data is: ${JSON.stringify(data)}.`);
  });
}

// Subscribe to audio device state changes.
async onDeviceChange() {
  await loadAudioRoutingManager();
  await audioRoutingManager.on('deviceChange', audio.DeviceFlag.ALL_DEVICES_FLAG, (deviceChanged) => {
    console.info('on device change type : ' + deviceChanged.type);
    console.info('on device descriptor size : ' + deviceChanged.deviceDescriptors.length);
    console.info('on device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceRole);
    console.info('on device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceType);
  });
}

// Unsubscribe from audio device state changes.
async offDeviceChange() {
  await loadAudioRoutingManager();
  await audioRoutingManager.off('deviceChange', (deviceChanged) => {
    console.info('off device change type : ' + deviceChanged.type);
    console.info('off device descriptor size : ' + deviceChanged.deviceDescriptors.length);
    console.info('off device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceRole);
    console.info('off device change descriptor : ' + deviceChanged.deviceDescriptors[0].deviceType);
  });
}

// Combined call: query all devices, subscribe to state changes, then manually connect or disconnect a device
// (for example, a wired headset), query all devices again, and finally unsubscribe.
async test(){
  await getDevices();
  await onDeviceChange();
  // Manually disconnect or connect the device here.
  await getDevices();
  await offDeviceChange();
}
```
3. (Optional) Activate a communication device and query its activation status.

```js
import audio from '@ohos.multimedia.audio';

// Create an AudioRoutingManager instance.
async loadAudioRoutingManager() {
  var audioRoutingManager = await audio.getAudioManager().getRoutingManager();
  console.info('audioRoutingManager------create-------success.');
}

// Set the communication device to the active state.
async setCommunicationDevice() {
  await loadAudioRoutingManager();
  await audioRoutingManager.setCommunicationDevice(audio.CommunicationDeviceType.SPEAKER, true).then(() => {
    console.info('setCommunicationDevice true is success.');
  });
}

// Query the activation status of the communication device.
async isCommunicationDeviceActive() {
  await loadAudioRoutingManager();
  await audioRoutingManager.isCommunicationDeviceActive(audio.CommunicationDeviceType.SPEAKER).then((value) => {
    console.info(`CommunicationDevice state is: ${value}.`);
  });
}

// Combined call: activate the device first, then query its status.
async test(){
  await setCommunicationDevice();
  await isCommunicationDeviceActive();
}
```
zh-cn/application-dev/media/audio-volume-manager.md (new file, 0 → 100644) @ 2d1006a6

# Volume Management Development Guide

## Introduction

AudioVolumeManager provides methods for volume management. This guide describes how an application can use AudioVolumeManager to obtain the volume information of a specified stream, listen for ringer mode changes, mute the microphone, and so on.

## How It Works

This module provides the common APIs of the volume management module.

**Figure 1** Common volume management APIs

![](figures/zh-ch_image_audio_volume_manager.png)

**NOTE:** AudioVolumeManager covers volume change monitoring and audio volume group management (AudioVolumeGroupManager). To call AudioVolumeGroupManager methods, first call getVolumeGroupManager to create an AudioVolumeGroupManager instance, and then call the corresponding APIs. The main APIs cover obtaining the volume of a specified stream, muting the microphone, and listening for microphone state changes. For more information, see the [API reference](../reference/apis/js-apis-audio.md).

## Constraints

Before developing microphone management features, configure the microphone permission (ohos.permission.MICROPHONE) for the application. To set the microphone state, the audio management configuration permission (ohos.permission.MANAGE_AUDIO_CONFIG) is also required; note that this is a system-level permission. For details about permission configuration, see the [access control authorization guide](../security/accesstoken-guidelines.md).
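As a minimal illustration of the runtime side of this requirement (not part of the original guide), the sketch below assumes the stage model and the @ohos.abilityAccessCtrl module, with `context` standing in for the application's UIAbility context; the exact call may differ depending on the API version and application model.

```js
import abilityAccessCtrl from '@ohos.abilityAccessCtrl';

// Sketch: ask the user to grant the microphone permission at runtime (stage model assumed).
// 'context' is assumed to be the UIAbility context of the application.
async function requestMicrophonePermission(context) {
  let atManager = abilityAccessCtrl.createAtManager();
  try {
    let result = await atManager.requestPermissionsFromUser(context, ['ohos.permission.MICROPHONE']);
    // authResults[i] is 0 when the corresponding permission has been granted.
    console.info(`Microphone permission results: ${JSON.stringify(result.authResults)}`);
  } catch (err) {
    console.error(`requestPermissionsFromUser failed, ${err}`);
  }
}
```

The system-level ohos.permission.MANAGE_AUDIO_CONFIG permission is not granted through a user dialog; it is configured through the application's permission declaration as described in the access control guide linked above.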
## Development Guide

For detailed API descriptions, see [AudioVolumeManager in the volume management API reference](../reference/apis/js-apis-audio.md#audiovolumemanager9).

1. Create an AudioVolumeGroupManager instance.

Before using the AudioVolumeGroupManager APIs, use getVolumeGroupManager to create an AudioVolumeGroupManager instance.

```js
import audio from '@ohos.multimedia.audio';

async loadVolumeGroupManager() {
  const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
  var audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
  console.info('audioVolumeGroupManager create success.');
}
```
2. (Optional) Obtain volume information and the ringer mode.

To obtain the volume information of a specified audio stream (ringtone, voice call, media, voice assistant, and so on), or to find out whether the current device is in silent, vibrate, or ringing mode, refer to and call the following APIs. For more events, see the [API reference](../reference/apis/js-apis-audio.md).

```js
import audio from '@ohos.multimedia.audio';

async loadVolumeGroupManager() {
  const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
  var audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
  console.info('audioVolumeGroupManager create success.');
}

// Obtain the current volume of the specified stream (the range is 0 to 15).
async getVolume() {
  await loadVolumeGroupManager();
  await audioVolumeGroupManager.getVolume(audio.AudioVolumeType.MEDIA).then((value) => {
    console.info(`getVolume success and volume is: ${value}.`);
  });
}

// Obtain the minimum volume of the specified stream.
async getMinVolume() {
  await loadVolumeGroupManager();
  await audioVolumeGroupManager.getMinVolume(audio.AudioVolumeType.MEDIA).then((value) => {
    console.info(`getMinVolume success and volume is: ${value}.`);
  });
}

// Obtain the maximum volume of the specified stream.
async getMaxVolume() {
  await loadVolumeGroupManager();
  await audioVolumeGroupManager.getMaxVolume(audio.AudioVolumeType.MEDIA).then((value) => {
    console.info(`getMaxVolume success and volume is: ${value}.`);
  });
}

// Obtain the current ringer mode: silent (0) | vibrate (1) | ringing (2).
async getRingerMode() {
  await loadVolumeGroupManager();
  await audioVolumeGroupManager.getRingerMode().then((value) => {
    console.info(`getRingerMode success and RingerMode is: ${value}.`);
  });
}
```
3. (Optional) Query, set, and listen to the microphone state.

To obtain or set the microphone state, or to listen for microphone state changes, refer to and call the following APIs.

```js
import audio from '@ohos.multimedia.audio';

async loadVolumeGroupManager() {
  const groupid = audio.DEFAULT_VOLUME_GROUP_ID;
  var audioVolumeGroupManager = await audio.getAudioManager().getVolumeManager().getVolumeGroupManager(groupid);
  console.info('audioVolumeGroupManager create success.');
}

// Listen for microphone state changes.
async on() {
  await loadVolumeGroupManager();
  await audioVolumeGroupManager.on('micStateChange', (micStateChange) => {
    console.info(`Current microphone status is: ${micStateChange.mute}`);
  });
}

// Query whether the microphone is muted.
async isMicrophoneMute() {
  await audioVolumeGroupManager.isMicrophoneMute().then((value) => {
    console.info(`isMicrophoneMute is: ${value}.`);
  });
}

// Mute the microphone.
async setMicrophoneMuteTrue() {
  await loadVolumeGroupManager();
  await audioVolumeGroupManager.setMicrophoneMute(true).then(() => {
    console.info('setMicrophoneMute to mute.');
  });
}

// Unmute the microphone.
async setMicrophoneMuteFalse() {
  await loadVolumeGroupManager();
  await audioVolumeGroupManager.setMicrophoneMute(false).then(() => {
    console.info('setMicrophoneMute to not mute.');
  });
}

// Combined call: subscribe first, query the microphone state, mute the microphone and query again, and finally unmute it.
async test(){
  await on();
  await isMicrophoneMute();
  await setMicrophoneMuteTrue();
  await isMicrophoneMute();
  await setMicrophoneMuteFalse();
}
```
zh-cn/application-dev/media/figures/zh-ch_image_audio_routing_manager.png (new file, 0 → 100644, 47.4 KB) @ 2d1006a6

zh-cn/application-dev/media/figures/zh-ch_image_audio_volume_manager.png (new file, 0 → 100644, 77.4 KB) @ 2d1006a6
zh-cn/application-dev/reference/apis/js-apis-audio.md @ 2d1006a6

@@ -3916,7 +3916,7 @@ audioRenderer.off('markReach');

on(type: "periodReach", frame: number, callback: Callback\<number>): void

Subscribes to period reach events. When the number of rendered frames reaches the value of the frame parameter, the callback is triggered and the set value is returned.

**System capability:** SystemCapability.Multimedia.Audio.Renderer
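The example block of this reference entry is not included in the diff; the sketch below mirrors the usage shown in step 11 of the audio-renderer.md guide above, with the frame threshold of 1000 chosen purely for illustration.

```js
// Fire the callback every time another 1000 frames have been rendered (1000 is an arbitrary example value).
audioRenderer.on('periodReach', 1000, (reachNumber) => {
  console.info(`In this period, the renderer reached frame: ${reachNumber}`);
});

// Cancel the subscription once the notification is no longer needed.
audioRenderer.off('periodReach');
```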
...
...
@@ -4558,7 +4558,7 @@ audioCapturer.off('markReach');

on(type: "periodReach", frame: number, callback: Callback\<number>): void

Subscribes to period reach events. When the number of captured frames reaches the value of the frame parameter, the callback is triggered and the set value is returned.

**System capability:** SystemCapability.Multimedia.Audio.Capturer
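Likewise for the capturer, the following sketch mirrors step 8 of the audio-capturer.md guide above; the frame threshold of 1000 is an arbitrary example value.

```js
// Fire the callback every time another 1000 frames have been captured (1000 is an arbitrary example value).
audioCapturer.on('periodReach', 1000, (reachNumber) => {
  console.info(`In this period, the Capturer reached frame: ${reachNumber}`);
});

// Cancel the subscription once the notification is no longer needed.
audioCapturer.off('periodReach');
```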
...
...