机器未来 / Paddle (forked from PaddlePaddle / Paddle)
Commit 54b6c390 (unverified)
Authored Aug 04, 2021 by zhouweiwei2014; committed via GitHub on Aug 04, 2021
fix API bug of Tensor.cuda (#34416)
Parent: 1f0f5d3c
Showing 2 changed files with 24 additions and 7 deletions (+24, -7):
paddle/fluid/pybind/imperative.cc: +17, -7
python/paddle/fluid/tests/unittests/test_var_base.py: +7, -0
paddle/fluid/pybind/imperative.cc
@@ -1400,20 +1400,26 @@ void BindImperative(py::module *m_ptr) {
        )DOC")
       .def("cuda",
-           [](const std::shared_ptr<imperative::VarBase> &self, int device_id,
-              bool blocking) {
+           [](const std::shared_ptr<imperative::VarBase> &self,
+              py::handle &handle, bool blocking) {
 #if !defined(PADDLE_WITH_CUDA) && !defined(PADDLE_WITH_HIP)
             PADDLE_THROW(platform::errors::PermissionDenied(
                 "Cannot copy this Tensor to GPU in CPU version Paddle, "
                 "Please recompile or reinstall Paddle with CUDA support."));
 #else
             int device_count = platform::GetCUDADeviceCount();
-            if (device_id == -1) {
+            int device_id = 0;
+            if (handle == py::none()) {
               if (platform::is_gpu_place(self->Place())) {
                 return self;
               } else {
                 device_id = 0;
               }
-            }
+            } else {
+              PyObject *py_obj = handle.ptr();
+              PADDLE_ENFORCE_EQ(
+                  PyCheckInteger(py_obj), true,
+                  platform::errors::InvalidArgument(
+                      " 'device_id' must be a positive integer"));
+              device_id = py::cast<int>(handle);
+            }
             PADDLE_ENFORCE_GE(device_id, 0,
...
@@ -1437,26 +1443,30 @@ void BindImperative(py::module *m_ptr) {
             }
 #endif
           },
-          py::arg("device_id") = -1, py::arg("blocking") = true, R"DOC(
+          py::arg("device_id") = py::none(), py::arg("blocking") = true, R"DOC(
         Returns a copy of this Tensor in GPU memory.
 
         If this Tensor is already in GPU memory and device_id is default,
         then no copy is performed and the original Tensor is returned.
 
         Args:
-            device_id(int, optional): The destination GPU device id. Defaults to the current device.
+            device_id(int, optional): The destination GPU device id. Default: None, means current device.
             blocking(bool, optional): If False and the source is in pinned memory, the copy will be
               asynchronous with respect to the host. Otherwise, the argument has no effect. Default: False.
 
         Examples:
             .. code-block:: python
 
              # required: gpu
              import paddle
              x = paddle.to_tensor(1.0, place=paddle.CPUPlace())
              print(x.place)    # CPUPlace
 
              y = x.cuda()
              print(y.place)    # CUDAPlace(0)
 
+             y = x.cuda(None)
+             print(y.place)    # CUDAPlace(0)
+
              y = x.cuda(1)
              print(y.place)    # CUDAPlace(1)
...
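The substance of the change above is that the `cuda` binding no longer takes `int device_id` with a default of `-1`, but a `py::handle` whose default is `py::none()`: the wrapper can now tell "no argument / None" apart from an explicit device id, and it rejects non-integer values via `PyCheckInteger` before casting. As a reading aid only, here is a rough Python sketch of that resolution logic; the function name `_resolve_cuda_device_id` and the `tensor_is_on_gpu` flag are invented for illustration and are not part of Paddle.

    # Illustrative sketch only: mirrors the control flow of the new C++ code,
    # it is not an actual Paddle API.
    def _resolve_cuda_device_id(device_id=None, tensor_is_on_gpu=False):
        if device_id is None:
            # None (the new default): keep the tensor as-is if it already lives
            # on a GPU (the C++ code returns `self` without copying), otherwise
            # fall back to device 0.
            return None if tensor_is_on_gpu else 0
        if not isinstance(device_id, int):
            # Mirrors PADDLE_ENFORCE_EQ(PyCheckInteger(py_obj), true, ...)
            raise ValueError("'device_id' must be a positive integer")
        if device_id < 0:
            # Mirrors the PADDLE_ENFORCE_GE(device_id, 0, ...) check that follows
            raise ValueError("'device_id' must be a positive integer")
        return device_id

    print(_resolve_cuda_device_id())                             # 0
    print(_resolve_cuda_device_id(None, tensor_is_on_gpu=True))  # None: keep tensor where it is
    print(_resolve_cuda_device_id(1))                            # 1

In the real binding the same checks raise Paddle's InvalidArgument error, which surfaces in Python as ValueError, as the updated unit test below asserts.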
python/paddle/fluid/tests/unittests/test_var_base.py
@@ -72,10 +72,17 @@ class TestVarBase(unittest.TestCase):
             if core.is_compiled_with_cuda():
                 y = x.pin_memory()
                 self.assertEqual(y.place.__repr__(), "CUDAPinnedPlace")
                 y = x.cuda()
+                y = x.cuda(None)
                 self.assertEqual(y.place.__repr__(), "CUDAPlace(0)")
+                y = x.cuda(device_id=0)
+                self.assertEqual(y.place.__repr__(), "CUDAPlace(0)")
                 y = x.cuda(blocking=False)
                 self.assertEqual(y.place.__repr__(), "CUDAPlace(0)")
                 y = x.cuda(blocking=True)
                 self.assertEqual(y.place.__repr__(), "CUDAPlace(0)")
+                with self.assertRaises(ValueError):
+                    y = x.cuda("test")
 
             # support 'dtype' is core.VarType
             x = paddle.rand((2, 2))
...
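Putting the updated docstring and the new test cases together, the fixed API can be exercised end to end roughly as follows. This is a usage sketch, not part of the commit; it assumes a CUDA build of Paddle with at least one visible GPU.

    import paddle
    import paddle.fluid.core as core

    if core.is_compiled_with_cuda():
        x = paddle.to_tensor(1.0, place=paddle.CPUPlace())
        print(x.place)                       # CPUPlace

        print(x.cuda().place)                # CUDAPlace(0)
        print(x.cuda(None).place)            # CUDAPlace(0); None now means "current device"
        print(x.cuda(device_id=0).place)     # CUDAPlace(0)
        print(x.cuda(blocking=False).place)  # CUDAPlace(0)

        try:
            x.cuda("test")                   # non-integer device_id is rejected
        except ValueError as exc:
            print("rejected:", exc)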