Multi-node multi-GPU parallel training: how should communication between the nodes be configured?
Created by: gobigrassland
My multi-node multi-GPU parallel training experiment fails with the following error log:
File "train.py", line 55, in <module>
main()
File "train.py", line 51, in main
ins.train()
File "/export/data/PLSC-master/plsc/entry.py", line 927, in train
exe.run(self.startup_program)
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/executor.py", line 783, in run
six.reraise(*sys.exc_info())
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/executor.py", line 778, in run
use_program_cache=use_program_cache)
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/executor.py", line 831, in _run_impl
use_program_cache=use_program_cache)
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/executor.py", line 905, in _run_program
fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2 paddle::platform::NCCLCommContext::CreateNCCLComm(ncclUniqueId*, int, int, int, int)
3 paddle::operators::CCommInitOp::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
4 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
5 paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
6 paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string> > const&, bool, bool)
----------------------
Error Message Summary:
----------------------
Error: An error occurred here. There is no accurate error hint for this error yet. We are continuously in the process of increasing hint for this kind of error check. It would be helpful if you could inform us of how this conversion went by opening a github issue. And we will resolve it with high priority.
- New issue link: https://github.com/PaddlePaddle/Paddle/issues/new
- Recommended issue content: all error stack information
[unhandled system error] at (/paddle/paddle/fluid/platform/collective_helper.cc:67)
[operator < c_comm_init > error]
Here is what I have tried and my analysis of the problem:
(1) My main hypothesis for the error above is that some communication ports between the two physical machines may have been blocked by our company (I don't know the details, e.g. port 22). General connectivity between the machines is not blocked, though: I can regularly transfer data between them using nc or wget.
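To make that connectivity check reproducible, here is a minimal Python sketch that does roughly what `nc -z host port` does. The peer IP and the port numbers below are placeholders, not values from the actual cluster:

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds (like `nc -z`)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # "192.168.1.2" is a placeholder for the peer node; 22 is SSH,
    # 6170 is a candidate trainer/NCCL port to verify before training.
    for port in (22, 6170):
        print(port, "open" if port_is_open("192.168.1.2", port) else "closed")
```

Running this from each node against every other node, for every port the training job will use, narrows down whether a firewall rule is the culprit.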
(2) For multi-node experiments I usually start a docker container on each physical machine and set up passwordless SSH so the containers can log into each other. With this setup, multi-node parallel training with TensorFlow and PyTorch runs fine on our machines, but Paddle and PLSC hit the error above.
(3) To rule out a communication problem between the physical machines, I created two docker containers on the same physical machine, set up the same passwordless SSH between them, and in that configuration the code runs normally.
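A common cause of exactly this asymmetry (same-host containers work, cross-host fails) is NCCL selecting the wrong network interface for cross-machine traffic, e.g. the docker0 bridge. A sketch of the NCCL environment variables worth trying before the executor runs; the interface name "eth0" is an assumption and should be replaced with whatever `ip addr` shows inside the container:

```python
import os

# Print NCCL's connection attempts so the failing transport shows up in the log.
os.environ["NCCL_DEBUG"] = "INFO"
# Force NCCL onto the interface that actually routes between the machines;
# "eth0" is a placeholder -- inside docker it may be a different name.
os.environ["NCCL_SOCKET_IFNAME"] = "eth0"
# If there is no InfiniBand between the machines, fall back to TCP sockets.
os.environ["NCCL_IB_DISABLE"] = "1"
```

These must be set in the environment of every trainer process (e.g. exported in the launch script on each node) before `c_comm_init` runs.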
Based on the analysis above, I can only conclude that there is some restriction on communication between our machines, even though basic connectivity between them clearly works.
Could you explain how your multi-node communication works, and how to set up a distributed training cluster so that multi-node multi-GPU training runs correctly?
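For context on what `c_comm_init` needs: Paddle's distributed runtime identifies each GPU worker by an `ip:port` endpoint, passed to the trainers via environment variables such as PADDLE_TRAINER_ENDPOINTS and PADDLE_CURRENT_ENDPOINT. A minimal sketch of how such an endpoint list could be built for two nodes; the starting port 6170 and the per-GPU port scheme are assumptions that should be checked against the PLSC documentation for the Paddle version in use:

```python
def build_endpoints(node_ips, gpus_per_node, start_port=6170):
    """One ip:port endpoint per GPU process on each node (port scheme is an assumption)."""
    return ",".join("%s:%d" % (ip, start_port + i)
                    for ip in node_ips
                    for i in range(gpus_per_node))

# Two nodes with 2 GPUs each -> 4 trainer endpoints.
endpoints = build_endpoints(["192.168.1.1", "192.168.1.2"], 2)
print(endpoints)  # 192.168.1.1:6170,192.168.1.1:6171,192.168.1.2:6170,192.168.1.2:6171
```

Every port in this list must be reachable from every node; if the company firewall blocks the chosen port range, NCCL communicator creation fails with an "unhandled system error" exactly as in the log above.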