PaddlePaddle / Paddle
Issue #24297 · Opened Apr 30, 2020 by saxon_zh (Guest)

Asking for guidance on using allgather

Created by: Meiyim

hi~ I'm currently trying out Paddle's distributed communication API:

https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/layers/collective.py#L108

With Paddle 1.7.1, CUDA 10.1, in dygraph mode, calling it directly fails (a similar problem also occurs in static-graph mode). Could you please help check whether I'm using it incorrectly?

Command used to run it:

python3 -m paddle.distributed.launch test.py

Test code:

from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import sys
import argparse
import logging

import paddle.fluid as F
import paddle.fluid.layers as L
import paddle.fluid.dygraph as D

# Enter dygraph mode on this process's GPU (entering the guard manually
# instead of wrapping everything in a `with` block).
place = F.CUDAPlace(D.parallel.Env().dev_id)
D.guard(place).__enter__()

did = D.parallel.Env().dev_id  # local GPU id of this process

class M(D.Layer):
    def __init__(self):
        super().__init__()

    def forward(self, i):
        # gather `i` from every rank and concatenate the results
        i = L.collective._c_allgather(i, D.parallel.Env().nranks, did, use_calc_stream=True)
        return i

model = M()
ctx = D.parallel.prepare_context()  # sets up NCCL across the launched processes
model = D.parallel.DataParallel(model, ctx)

a = model(L.ones([3, 10, 3], dtype='float32') * did)
print(a.numpy())

The reported error is: Error: comunicator of ring id 1 has not been initialized

Checking NCCL and the verbose logs (V_LOG), it looks to me like NCCL has already been initialized (the logs for rank 0 and rank 1 are similar):

I0430 21:35:35.526125 67179 nccl_context.cc:127] init nccl context nranks: 2 local rank: 1 gpu id: 1
yq01-sys-hic-p40-box-a12-0233:67179:67179 [1] INFO NET : Using interface eth0:10.255.138.19<0>
yq01-sys-hic-p40-box-a12-0233:67179:67179 [1] INFO NET/IB : Using interface eth0 for sideband communication
yq01-sys-hic-p40-box-a12-0233:67179:67179 [1] INFO NET/IB: [0] mlx5_0:1/RoCE
yq01-sys-hic-p40-box-a12-0233:67179:67179 [1] INFO Using internal Network IB
yq01-sys-hic-p40-box-a12-0233:67179:67179 [1] INFO Using NCCL Low-latency algorithm for sizes below 16384
yq01-sys-hic-p40-box-a12-0233:67179:67179 [1] INFO NET/IB: Dev 0 Port 1 qpn 2830 mtu 3 GID 3 (0/138AFF0AFFFF0000)
yq01-sys-hic-p40-box-a12-0233:67179:67179 [1] INFO CUDA Dev 1, IB Ports : mlx5_0/1(SOC)
yq01-sys-hic-p40-box-a12-0233:67179:67179 [1] INFO 1[67179] -> 0[67148] via direct shared memory
W0430 21:35:36.673970 67179 device_context.cc:237] Please NOTE: device: 1, CUDA Capability: 61, Driver API Version: 10.2, Runtime API Version: 10.0
W0430 21:35:36.676888 67179 device_context.cc:245] device: 1, cuDNN Version: 7.0.
W0430 21:35:36.676926 67179 device_context.cc:271] WARNING: device: 1. The installed Paddle is compiled with CUDNN 7.6, but CUDNN version in your machine is 7.0, which may cause serious incompatible bug. Please recompile or reinstall Paddle with compatible CUDNN version.
I0430 21:35:37.551332 67179 tracer.cc:105] Trace Op: fill_constant
I0430 21:35:37.551908 67179 tracer.cc:105] Trace Op: scale
I0430 21:35:37.552693 67179 tracer.cc:85] Trace Op: c_allgather
Traceback (most recent call last):
  File "test_collective.py", line 45, in <module>
    a = model(L.ones([3,10,3], dtype='float32') * did)
  File "/home/work/chenxuyi/dis/dyg/app/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 304, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "/home/work/chenxuyi/dis/dyg/app/lib/python3.6/site-packages/paddle/fluid/dygraph/parallel.py", line 148, in forward
    return self._layers(*inputs, **kwargs)
  File "/home/work/chenxuyi/dis/dyg/app/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 304, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "test_collective.py", line 37, in forward
    i = L.collective._c_allgather(i, D.parallel.Env().nranks, did, use_calc_stream=True)
  File "/home/work/chenxuyi/dis/dyg/app/lib/python3.6/site-packages/paddle/fluid/layers/collective.py", line 128, in _c_allgather
    'use_calc_stream': use_calc_stream
  File "/home/work/chenxuyi/dis/dyg/app/lib/python3.6/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
    return self.main_program.current_block().append_op(*args, **kwargs)
  File "/home/work/chenxuyi/dis/dyg/app/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2516, in append_op
    kwargs.get("stop_gradient", False))
  File "/home/work/chenxuyi/dis/dyg/app/lib/python3.6/site-packages/paddle/fluid/dygraph/tracer.py", line 39, in trace_op
    not stop_gradient)
paddle.fluid.core_avx.EnforceNotMet:

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2   paddle::platform::NCCLCommContext::Get(int, int) const
3   paddle::operators::CAllGatherOpCUDAKernel<float>::Compute(paddle::framework::ExecutionContext const&) const
4   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 0ul, paddle::operators::CAllGatherOpCUDAKernel<float>, paddle::operators::CAllGatherOpCUDAKernel<double>, paddle::operators::CAllGatherOpCUDAKernel<int>, paddle::operators::CAllGatherOpCUDAKernel<long>, paddle::operators::CAllGatherOpCUDAKernel<paddle::platform::float16> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
5   paddle::imperative::PreparedOp::Run(std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const*, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const*, std::unordered_map<std::string, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> > > > const*)
6   paddle::imperative::OpBase::Run(std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&)
7   paddle::imperative::Tracer::TraceOp(std::string const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::unordered_map<std::string, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> > > >, paddle::platform::Place const&, bool)
----------------------
Error Message Summary:
----------------------
Error: comunicator of ring id 1 has not been initialized
  [Hint: Expected comm_map_.count(ring_id) > 0, but received comm_map_.count(ring_id):0 <= 0:0.] at (/paddle/paddle/fluid/platform/collective_helper.h:89)
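
Note: in the collective.py linked above, the third positional parameter of _c_allgather appears to be ring_id (defaulting to 0), not the device id. The test code passes did as that argument, so the process on GPU 1 requests communicator ring 1, while prepare_context/DataParallel seem to initialize only the default ring 0; that would match the "comunicator of ring id 1 has not been initialized" message (rank 0 happens to pass 0 and so would not raise this particular error). A minimal sketch of the forward with the ring id pinned to 0, assuming that signature:

def forward(self, i):
    # Use ring_id=0, the default communicator set up by prepare_context();
    # previously `did` (the device id) was being passed as ring_id.
    # Assumes _c_allgather(x, nranks, ring_id=0, use_calc_stream=False)
    # as in the collective.py linked above.
    i = L.collective._c_allgather(
        i, D.parallel.Env().nranks, 0, use_calc_stream=True)
    return i

If a separate ring per device were really intended, each ring id would presumably have to be initialized explicitly first; after prepare_context, ring 0 looks like the only one that exists.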
Reference: paddlepaddle/Paddle#24297