Unverified · Commit 1cb4c154 authored by 李季, committed by GitHub


Fix the bug in the paddle.distributed.split demo. The paddle.distributed.split op can only be used in static mode. (#34306)

* fix the bug in paddle.distributed.split demo
Parent b5d8f43e
@@ -1341,20 +1341,20 @@ def split(x,
     Examples:
         .. code-block:: python

+            # required: distributed
             import paddle
-            from paddle.distributed import init_parallel_env
-            # required: gpu
+            import paddle.distributed.fleet as fleet
+
+            paddle.enable_static()
             paddle.set_device('gpu:%d'%paddle.distributed.ParallelEnv().dev_id)
-            init_parallel_env()
+            fleet.init(is_collective=True)
             data = paddle.randint(0, 8, shape=[10,4])
             emb_out = paddle.distributed.split(
                 data,
                 (8, 8),
                 operation="embedding",
                 num_partitions=2)
     """
     assert isinstance(size, (list, tuple)), (
         "The type of size for "
...