s920243400 / PaddleDetection
Forked from PaddlePaddle / PaddleDetection
Commit 53fa7cb9
Authored Mar 30, 2018 by Yu Yang

Add local cache of double buffer reader

Parent: 95658767
Showing 1 changed file with 14 additions and 8 deletions

paddle/fluid/operators/reader/create_double_buffer_reader_op.cc  +14  −8
paddle/fluid/operators/reader/create_double_buffer_reader_op.cc @ 53fa7cb9

@@ -128,9 +128,6 @@ void DoubleBufferReader::ReadNext(std::vector<framework::LoDTensor>* out) {
     PADDLE_THROW("There is no next data!");
   }

-  if (local_buffer_.payloads_.empty()) {
-    buffer_->Receive(&local_buffer_);
-  }
   *out = local_buffer_.payloads_;
   local_buffer_.payloads_.clear();
   if (local_buffer_.ctx_) {
@@ -149,21 +146,30 @@ void DoubleBufferReader::ReInit() {
 void DoubleBufferReader::PrefetchThreadFunc() {
   VLOG(5) << "A new prefetch thread starts.";
   size_t gpu_ctx_offset = 0;
+  std::vector<std::vector<framework::LoDTensor>> cpu_tensor_cache(4);
+  std::vector<std::vector<framework::LoDTensor>> gpu_tensor_cache(4);
+  size_t tensor_cache_id = 0;
   while (reader_->HasNext()) {
     Item batch;
     reader_->ReadNext(&batch.payloads_);
     if (platform::is_gpu_place(place_)) {
-      std::vector<framework::LoDTensor> gpu_batch;
+      tensor_cache_id %= 4;
+      auto& gpu_batch = gpu_tensor_cache[tensor_cache_id];
+      auto& cpu_batch = cpu_tensor_cache[tensor_cache_id];
+      cpu_batch = batch.payloads_;
+      ++tensor_cache_id;
       auto& gpu_ctx = this->ctxs_[gpu_ctx_offset++];
       gpu_ctx_offset %= this->ctxs_.size();
       gpu_batch.resize(batch.payloads_.size());
-      for (size_t i = 0; i < batch.payloads_.size(); ++i) {
-        framework::TensorCopy(batch.payloads_[i], place_, *gpu_ctx,
-                              &gpu_batch[i]);
+      for (size_t i = 0; i < cpu_batch.size(); ++i) {
+        framework::TensorCopy(cpu_batch[i], place_, *gpu_ctx,
+                              &gpu_batch[i]);
         gpu_batch[i].set_lod(batch.payloads_[i].lod());
       }
       batch.ctx_ = gpu_ctx.get();
-      batch.payloads_ = gpu_batch;
+      std::swap(gpu_batch, batch.payloads_);
     }
     try {
...