机器未来 / Paddle (forked from PaddlePaddle / Paddle)
Commit 8ccc61f3 (unverified)
Authored Mar 26, 2018 by Qiao Longfei; committed via GitHub on Mar 26, 2018

support empty tensor (#9338)

* support empty tensor

Parent: 30f1bd6a

Showing 3 changed files with 25 additions and 7 deletions (+25 −7)
paddle/fluid/framework/tensor_impl.h                +4 −4
paddle/fluid/memory/memory_test.cc                  +2 −2
python/paddle/fluid/tests/unittests/test_tensor.py  +19 −1
paddle/fluid/framework/tensor_impl.h

```diff
@@ -117,10 +117,10 @@ inline void* Tensor::mutable_data(platform::Place place, std::type_index type) {
   if (holder_ != nullptr) {
     holder_->set_type(type);
   }
-  PADDLE_ENFORCE_GT(numel(), 0,
-                    "When calling this method, the Tensor's numel must be "
-                    "larger than zero. "
-                    "Please check Tensor::Resize has been called first.");
+  PADDLE_ENFORCE_GE(numel(), 0,
+                    "When calling this method, the Tensor's numel must be "
+                    "equal or larger than zero. "
+                    "Please check Tensor::Resize has been called first.");
   int64_t size = numel() * SizeOfType(type);
   /* some versions of boost::variant don't have operator!= */
   if (holder_ == nullptr || !(holder_->place() == place) ||
```
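The change above relaxes the precondition in `Tensor::mutable_data` from `numel() > 0` to `numel() >= 0`, so a tensor with a zero-sized dimension can be allocated. A minimal Python sketch of the invariant change (hypothetical stand-in functions, not Paddle's actual API):

```python
# Sketch (hypothetical, not Paddle's API) of the GT -> GE check change in
# Tensor::mutable_data. numel() is the product of the dimensions, so any
# zero-sized dimension yields zero elements.
from functools import reduce
from operator import mul


def numel(dims):
    # Product of dimensions; a 0 in any dimension yields 0 elements.
    return reduce(mul, dims, 1)


def mutable_data_old(dims):
    if not numel(dims) > 0:  # PADDLE_ENFORCE_GT before the patch
        raise ValueError("numel must be larger than zero")
    return bytearray(numel(dims) * 4)  # pretend float32 storage


def mutable_data_new(dims):
    if not numel(dims) >= 0:  # PADDLE_ENFORCE_GE after the patch
        raise ValueError("numel must be equal or larger than zero")
    return bytearray(numel(dims) * 4)
```

With the old check, `mutable_data_old([0, 1])` raises; with the new one, it returns a zero-byte buffer while the tensor keeps its shape metadata.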
paddle/fluid/memory/memory_test.cc

```diff
@@ -59,7 +59,7 @@ TEST(BuddyAllocator, CPUMultAlloc) {
   EXPECT_EQ(total_size, 0UL);
 
   for (auto size :
-       {128, 256, 1024, 4096, 16384, 65536, 262144, 1048576, 4194304}) {
+       {0, 128, 256, 1024, 4096, 16384, 65536, 262144, 1048576, 4194304}) {
     ps[paddle::memory::Alloc(cpu, size)] = size;
 
     // Buddy Allocator doesn't manage too large memory chunk
@@ -117,7 +117,7 @@ TEST(BuddyAllocator, GPUMultAlloc) {
   EXPECT_EQ(total_size, 0UL);
 
   for (auto size :
-       {128, 256, 1024, 4096, 16384, 65536, 262144, 1048576, 4194304}) {
+       {0, 128, 256, 1024, 4096, 16384, 65536, 262144, 1048576, 4194304}) {
     ps[paddle::memory::Alloc(gpu, size)] = size;
 
     // Buddy Allocator doesn't manage too large memory chunk
```
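Both hunks add `0` to the size sweep, so the allocator tests now cover zero-byte requests as well. The test pattern (allocations keyed by their handle in a map) can be sketched in Python with a hypothetical stand-in allocator:

```python
# Python stand-in (hypothetical FakeAllocator, not Paddle's buddy allocator)
# for the test pattern above: each allocation is keyed by its handle, and
# size 0 is part of the sweep, so the allocator must tolerate zero-byte
# requests and still hand back a distinct handle.
class FakeAllocator:
    """Hypothetical stand-in for paddle::memory::Alloc/Free."""

    def __init__(self):
        self._next = 0
        self.live = {}

    def alloc(self, size):
        handle = self._next  # unique "pointer", even for size 0
        self._next += 1
        self.live[handle] = size
        return handle

    def free(self, handle):
        del self.live[handle]


alloc = FakeAllocator()
ps = {}
for size in [0, 128, 256, 1024, 4096]:
    ps[alloc.alloc(size)] = size
```

Because each handle is distinct, the map ends up with one entry per request, including the zero-byte one, mirroring what the C++ test expects of `paddle::memory::Alloc`.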
python/paddle/fluid/tests/unittests/test_tensor.py

```diff
@@ -126,7 +126,6 @@ class TestTensor(unittest.TestCase):
     def test_lod_tensor_gpu_init(self):
         if not core.is_compiled_with_cuda():
             return
-        scope = core.Scope()
         place = core.CUDAPlace(0)
         lod_py = [[0, 2, 5], [0, 2, 4, 5]]
         lod_tensor = core.LoDTensor()
@@ -144,6 +143,25 @@ class TestTensor(unittest.TestCase):
         self.assertAlmostEqual(2.0, lod_v[0, 0, 0, 1])
         self.assertListEqual(lod_py, lod_tensor.lod())
 
+    def test_empty_tensor(self):
+        place = core.CPUPlace()
+        scope = core.Scope()
+        var = scope.var("test_tensor")
+
+        tensor = var.get_tensor()
+
+        tensor.set_dims([0, 1])
+        tensor.alloc_float(place)
+
+        tensor_array = numpy.array(tensor)
+        self.assertEqual((0, 1), tensor_array.shape)
+
+        if core.is_compiled_with_cuda():
+            gpu_place = core.CUDAPlace(0)
+            tensor.alloc_float(gpu_place)
+
+            tensor_array = numpy.array(tensor)
+            self.assertEqual((0, 1), tensor_array.shape)
 
 if __name__ == '__main__':
     unittest.main()
```