Commit 4ff237f9
Authored Apr 06, 2018 by chengduoZH
Parent: 17842e33
Repository: 机器未来 / Paddle (fork of PaddlePaddle / Paddle)

Commit message: follow comments

Showing 2 changed files with 12 additions and 7 deletions (+12 / -7):

paddle/fluid/framework/tensor_impl.h    +2  -3
paddle/fluid/pybind/tensor_py.h         +10 -4
paddle/fluid/framework/tensor_impl.h

@@ -132,8 +132,7 @@ inline void* Tensor::mutable_data(platform::Place place, std::type_index type) {
              platform::is_cuda_pinned_place(place)) {
 #ifndef PADDLE_WITH_CUDA
     PADDLE_THROW(
-        "'CUDAPlace' or 'CUDAPinnedPlace' is not supported in CPU only "
-        "device.");
+        "CUDAPlace or CUDAPinnedPlace is not supported in CPU-only mode.");
   }
 #else
     if (platform::is_gpu_place(place)) {
@@ -153,7 +152,7 @@ inline void* Tensor::mutable_data(platform::Place place, std::type_index type) {
 inline void* Tensor::mutable_data(platform::Place place) {
   PADDLE_ENFORCE(this->holder_ != nullptr,
-                 "Cannot invoke mutable data if current hold nothing");
+                 "Cannot invoke mutable data if current hold nothing.");
   return mutable_data(place, holder_->type());
 }
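The first hunk only rewords the error string, but it sits inside the branch that rejects GPU or CUDA-pinned placements when the library is built without CUDA. For readers unfamiliar with that pattern, here is a minimal standalone sketch of the same compile-time guard; the WITH_CUDA macro, Place enum, and AllocateFor function are hypothetical stand-ins for illustration, not Paddle's actual types.

#include <cstdlib>
#include <stdexcept>

// Hypothetical stand-ins for platform::Place and the PADDLE_WITH_CUDA flag;
// this illustrates the guard pattern only, it is not Paddle's implementation.
enum class Place { kCPU, kCUDA, kCUDAPinned };

void* AllocateFor(Place place, std::size_t size) {
  if (place == Place::kCUDA || place == Place::kCUDAPinned) {
#ifndef WITH_CUDA
    // Built without CUDA: fail loudly with a clear message, which is the
    // role PADDLE_THROW plays in the hunk above.
    throw std::runtime_error(
        "CUDAPlace or CUDAPinnedPlace is not supported in CPU-only mode.");
#else
    // A CUDA build would dispatch to cudaMalloc / cudaMallocHost here.
    return nullptr;  // placeholder for the device allocation path
#endif
  }
  return std::malloc(size);  // plain CPU allocation
}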
paddle/fluid/pybind/tensor_py.h

@@ -143,7 +143,7 @@ void PyCPUTensorSetFromArray(
   std::vector<int64_t> dims;
   dims.reserve(array.ndim());
   for (size_t i = 0; i < array.ndim(); ++i) {
-    dims.push_back((int)array.shape()[i]);
+    dims.push_back(static_cast<int>(array.shape()[i]));
   }
   self.Resize(framework::make_ddim(dims));
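The only change in this hunk, repeated in the later hunks of this file, is replacing the C-style (int) cast with static_cast<int> when narrowing the int64_t extents of the numpy array. Below is a self-contained sketch of the same pattern, using a hypothetical BuildDims helper that takes a plain extent array instead of py::array so it does not depend on pybind11.

#include <cstddef>
#include <cstdint>
#include <vector>

// Build a dims vector from 64-bit extents, narrowing each one explicitly.
// static_cast is limited to numeric conversions and is easy to grep for,
// whereas a C-style (int) cast would also silently permit reinterpreting or
// const-stripping conversions if the operand type ever changed.
std::vector<int64_t> BuildDims(const int64_t* shape, std::size_t ndim) {
  std::vector<int64_t> dims;
  dims.reserve(ndim);
  for (std::size_t i = 0; i < ndim; ++i) {
    // As in the hunk above, the value is narrowed to int and then widened
    // back to int64_t on insertion into the vector.
    dims.push_back(static_cast<int>(shape[i]));
  }
  return dims;
}

Making the narrowing explicit with static_cast is presumably what the review comments behind this "follow comments" commit asked for.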
@@ -152,6 +152,8 @@ void PyCPUTensorSetFromArray(
 }
 template <>
+// This following specialization maps uint16_t in the parameter type to
+// platform::float16.
 void PyCPUTensorSetFromArray(
     framework::Tensor &self,
     py::array_t<uint16_t, py::array::c_style | py::array::forcecast> array,
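The comment added here (and repeated for the CUDA and CUDA-pinned variants below) records why the specialization is written on uint16_t: numpy exposes float16 data as raw 16-bit values at the buffer level, and the C++ side stores those same bits as platform::float16. A minimal, self-contained sketch of that idea follows; the Float16 struct, MiniTensor container, and SetFromBuffer function are hypothetical stand-ins, not the pybind11 binding itself.

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical 16-bit float carrier; in Paddle this role is played by
// platform::float16.
struct Float16 {
  uint16_t bits;  // raw IEEE 754 half-precision bit pattern
};

// A very small "tensor" that owns a typed byte buffer; crude analogue of
// framework::Tensor with mutable_data<T>.
struct MiniTensor {
  std::vector<unsigned char> storage;

  template <typename T>
  T* MutableData(std::size_t n) {
    storage.resize(n * sizeof(T));
    return reinterpret_cast<T*>(storage.data());
  }
};

// Primary template: copy n elements of T into the tensor as T.
template <typename T>
void SetFromBuffer(MiniTensor &self, const T *src, std::size_t n) {
  std::memcpy(self.MutableData<T>(n), src, n * sizeof(T));
}

// Full specialization: the parameter type stays uint16_t (what numpy reports
// for float16 arrays), but the tensor's element type becomes Float16 with the
// identical bit pattern -- the mapping the added comment describes.
template <>
void SetFromBuffer(MiniTensor &self, const uint16_t *src, std::size_t n) {
  std::memcpy(self.MutableData<Float16>(n), src, n * sizeof(uint16_t));
}

In the binding shown above, the py::array_t<uint16_t> parameter plays the role of src here, and the tensor's float16 storage plays the role of the Float16 buffer.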
@@ -159,7 +161,7 @@ void PyCPUTensorSetFromArray(
   std::vector<int64_t> dims;
   dims.reserve(array.ndim());
   for (size_t i = 0; i < array.ndim(); ++i) {
-    dims.push_back((int)array.shape()[i]);
+    dims.push_back(static_cast<int>(array.shape()[i]));
   }
   self.Resize(framework::make_ddim(dims));
@@ -176,7 +178,7 @@ void PyCUDATensorSetFromArray(
   std::vector<int64_t> dims;
   dims.reserve(array.ndim());
   for (size_t i = 0; i < array.ndim(); ++i) {
-    dims.push_back((int)array.shape()[i]);
+    dims.push_back(static_cast<int>(array.shape()[i]));
   }
   self.Resize(framework::make_ddim(dims));
@@ -190,6 +192,8 @@ void PyCUDATensorSetFromArray(
 }
 template <>
+// This following specialization maps uint16_t in the parameter type to
+// platform::float16.
 void PyCUDATensorSetFromArray(
     framework::Tensor &self,
     py::array_t<uint16_t, py::array::c_style | py::array::forcecast> array,
@@ -197,7 +201,7 @@ void PyCUDATensorSetFromArray(
   std::vector<int64_t> dims;
   dims.reserve(array.ndim());
   for (size_t i = 0; i < array.ndim(); ++i) {
-    dims.push_back((int)array.shape()[i]);
+    dims.push_back(static_cast<int>(array.shape()[i]));
   }
   self.Resize(framework::make_ddim(dims));
@@ -228,6 +232,8 @@ void PyCUDAPinnedTensorSetFromArray(
 }
 template <>
+// This following specialization maps uint16_t in the parameter type to
+// platform::float16.
 void PyCUDAPinnedTensorSetFromArray(
     framework::Tensor &self,
     py::array_t<uint16_t, py::array::c_style | py::array::forcecast> array,
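This last hunk adds the same comment to PyCUDAPinnedTensorSetFromArray, the variant that targets page-locked (pinned) host memory. As background on why a separate pinned variant exists at all: pinned host buffers can be copied to the device asynchronously and at higher bandwidth than ordinary pageable memory. The sketch below is plain C++ against the CUDA runtime host API, not Paddle code, and only illustrates that difference.

#include <cuda_runtime.h>
#include <cstdio>

int main() {
  const size_t n = 1 << 20;
  const size_t bytes = n * sizeof(float);

  float *pinned = nullptr;
  cudaMallocHost(&pinned, bytes);   // page-locked (pinned) host allocation
  float *device = nullptr;
  cudaMalloc(&device, bytes);

  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // Copies from pinned memory can run asynchronously on the stream; the same
  // call with a pageable (malloc'ed) source is effectively synchronous
  // because the driver must stage it through an internal pinned buffer.
  cudaMemcpyAsync(device, pinned, bytes, cudaMemcpyHostToDevice, stream);
  cudaStreamSynchronize(stream);

  cudaStreamDestroy(stream);
  cudaFree(device);
  cudaFreeHost(pinned);
  std::printf("copied %zu bytes from pinned host memory\n", bytes);
  return 0;
}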