Socket connect error with the sparse remote updater when running distributed training jobs
Created by: dzhwinter
A user reported an error in an MPI job, which raised the following exception on rank0:
```
F0616 11:10:25.841265 16207 LightNetwork.cpp:379] Check failed: connect(sockfd, (sockaddr *)&serv_addr, sizeof(serv_addr)) >= 0 ERROR connecting to 10.87.138.24: Connection refused [111]
F0616 11:10:25.841341 16197 LightNetwork.cpp:379] Check failed: connect(sockfd, (sockaddr *)&serv_addr, sizeof(serv_addr)) >= 0 ERROR connecting to 10.87.138.23: Connection refused [111]
```
We found that one node always failed to connect to rank0; its trainer was then killed, and the failure spread to all parameter servers, leading to a failed training job.
Many thanks to @typhoonzero for pointing out the receiver-side error. It turned out that the user started the parameter servers without a sparse port, while the trainer was configured with one. When the trainer tried to connect a socket to the sparse port, the job threw the exception above.
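For reference, this failure mode is easy to reproduce outside Paddle. The minimal standalone sketch below (the port number is hypothetical, and this is not Paddle code) shows that `connect()` to a port with no listener fails with `ECONNREFUSED`, errno 111, exactly as in the log above:

```cpp
#include <arpa/inet.h>
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
  int sockfd = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in serv_addr{};
  serv_addr.sin_family = AF_INET;
  serv_addr.sin_port = htons(7165);  // hypothetical sparse port, no listener
  inet_pton(AF_INET, "10.87.138.24", &serv_addr.sin_addr);

  if (connect(sockfd, (sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) {
    // With no pserver listening on this port, errno is ECONNREFUSED (111).
    std::fprintf(stderr, "ERROR connecting: %s [%d]\n",
                 std::strerror(errno), errno);
  }
  close(sockfd);
  return 0;
}
```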
This is just a misuse of the sparse remote updater, but we can learn some lessons from it. First, the system should not rely on implicit conditions: in this case, paddle_trainer silently rewrites the configuration when the user's config cannot satisfy the setup requirements, which is confusing at first glance:
```
I0616 11:10:18.829900 16197 Trainer.cpp:167] trainer mode: SgdSparseCpuTraining
I0616 11:10:18.829938 16197 TrainerInternal.cpp:237] Sgd sparse training can not work with ConcurrentRemoteParameterUpdater, automatically reset --use_old_updater=true
```
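A safer pattern is to validate the configuration up front and fail fast with an actionable message instead of silently flipping flags. A rough sketch (the `Config` struct, its field names, and `validateUpdaterConfig` are hypothetical, not the actual Paddle API):

```cpp
#include <stdexcept>

// Hypothetical configuration snapshot; field names are illustrative only.
struct Config {
  bool use_sparse_remote_updater;  // trainer wants sparse parameter updates
  int ports_num_for_sparse;        // sparse ports the pservers expose
};

// Fail fast with an actionable message instead of silently resetting a
// flag, as the trainer did with --use_old_updater above.
void validateUpdaterConfig(const Config &cfg) {
  if (cfg.use_sparse_remote_updater && cfg.ports_num_for_sparse == 0) {
    throw std::invalid_argument(
        "Trainer requests sparse remote updates, but the parameter servers "
        "expose no sparse ports. Restart the pservers with sparse ports "
        "enabled, or disable the sparse remote updater.");
  }
}
```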
Second, we need to log accurate information whenever an exception occurs. The trainer was trying to connect to a different socket, on the sparse port, but the log reads exactly like another round of the connection loop for the dense port. As a result, we were misled into believing the trainer had already connected to its partner and was merely stuck in retry logic.
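One possible improvement, sketched below (the channel-type enum and helper are hypothetical, not the actual LightNetwork interface), is to tag every connect attempt with the channel type and target endpoint, so a retry loop on the sparse port cannot be mistaken for the dense one:

```cpp
#include <cstdio>
#include <string>

// Hypothetical channel tag; the real LightNetwork code does not have this.
enum class ChannelType { kDense, kSparse };

const char *channelName(ChannelType t) {
  return t == ChannelType::kDense ? "dense" : "sparse";
}

// Log which channel and endpoint each connect attempt targets, so the logs
// distinguish a sparse-port retry loop from the dense-port one.
void logConnectAttempt(ChannelType type, const std::string &host, int port,
                       int attempt) {
  std::fprintf(stderr, "connecting %s channel to %s:%d (attempt %d)\n",
               channelName(type), host.c_str(), port, attempt);
}
```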