Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle)
Commit 8f80f5bc
Authored Aug 16, 2017 by liaogang
Parent: 0f868819

FIX: Release CPU/GPU memory via deleter

Showing 1 changed file with 41 additions and 18 deletions (+41 -18)

paddle/memory/memory.cc
@@ -16,19 +16,31 @@ limitations under the License. */
 
 #include "paddle/memory/detail/buddy_allocator.h"
 #include "paddle/memory/detail/system_allocator.h"
-#include <cstring>  // for memcpy
+#include <algorithm>  // for transform
+#include <cstring>    // for memcpy
+#include <mutex>      // for call_once
+
+#include "glog/logging.h"
 
 namespace paddle {
 namespace memory {
 
-detail::BuddyAllocator* GetCPUBuddyAllocator() {
-  static detail::BuddyAllocator* a = nullptr;
-  if (a == nullptr) {
-    a = new detail::BuddyAllocator(new detail::CPUAllocator,
-                                   platform::CpuMinChunkSize(),
-                                   platform::CpuMaxChunkSize());
-  }
-  return a;
+using BuddyAllocator = detail::BuddyAllocator;
+
+std::once_flag cpu_alloctor_flag;
+std::once_flag gpu_alloctor_flag;
+
+BuddyAllocator* GetCPUBuddyAllocator() {
+  static std::unique_ptr<BuddyAllocator, void (*)(BuddyAllocator*)> a{
+      nullptr, [](BuddyAllocator* p) { delete p; }};
+
+  std::call_once(cpu_alloctor_flag, [&]() {
+    a.reset(new BuddyAllocator(new detail::CPUAllocator,
+                               platform::CpuMinChunkSize(),
+                               platform::CpuMaxChunkSize()));
+  });
+
+  return a.get();
 }
 
 template <>
@@ -48,20 +60,31 @@ size_t Used<platform::CPUPlace>(platform::CPUPlace place) {
 
 #ifndef PADDLE_ONLY_CPU
 
-detail::BuddyAllocator* GetGPUBuddyAllocator(int gpu_id) {
-  static detail::BuddyAllocator** as = NULL;
-  if (as == NULL) {
+BuddyAllocator* GetGPUBuddyAllocator(int gpu_id) {
+  using BuddyAllocVec = std::vector<BuddyAllocator*>;
+  static std::unique_ptr<BuddyAllocVec, void (*)(BuddyAllocVec* p)> as{
+      new std::vector<BuddyAllocator*>, [](BuddyAllocVec* p) {
+        std::for_each(p->begin(), p->end(),
+                      [](BuddyAllocator* p) { delete p; });
+      }};
+
+  // GPU buddy allocators
+  auto& alloctors = *as.get();
+
+  // GPU buddy allocator initialization
+  std::call_once(gpu_alloctor_flag, [&]() {
     int gpu_num = platform::GetDeviceCount();
-    as = new detail::BuddyAllocator*[gpu_num];
+    alloctors.reserve(gpu_num);
     for (int gpu = 0; gpu < gpu_num; gpu++) {
       platform::SetDeviceId(gpu);
-      as[gpu] = new detail::BuddyAllocator(new detail::GPUAllocator,
-                                           platform::GpuMinChunkSize(),
-                                           platform::GpuMaxChunkSize());
+      alloctors.emplace_back(new BuddyAllocator(new detail::GPUAllocator,
+                                                platform::GpuMinChunkSize(),
+                                                platform::GpuMaxChunkSize()));
     }
-  }
+  });
+
   platform::SetDeviceId(gpu_id);
-  return as[gpu_id];
+  return alloctors[gpu_id];
 }
 
 template <>