Support configuring a different port for each trainer in cluster mode
Created by: jacquesqiao
Currently, the configuration for distributed Paddle is:
--pservers=host1,host2 --port=1234
This has some problems:
- We can only start one trainer per machine, because all trainers share the same port for communication.
- If that port is already in use by another program on any of the machines, the whole job fails.
Can we change it to a format like TensorFlow's, for example:
--cluster_spec='worker|host1:port1,ps|host2:port2'
so that we can start multiple trainers on one machine.
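For illustration, here is a minimal sketch of how such a `--cluster_spec` string could be parsed into per-role endpoint lists. The flag format is taken from the proposal above; the `parse_cluster_spec` helper and its return structure are hypothetical, not an existing Paddle API:

```python
# Hypothetical parser for the proposed --cluster_spec flag (not existing Paddle API).
import argparse
from collections import defaultdict


def parse_cluster_spec(spec):
    """Parse 'worker|host1:port1,ps|host2:port2' into {role: [(host, port), ...]}."""
    cluster = defaultdict(list)
    for entry in spec.split(","):
        role, endpoint = entry.split("|")
        host, port = endpoint.rsplit(":", 1)
        cluster[role].append((host, int(port)))
    return dict(cluster)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--cluster_spec", required=True)
    args = parser.parse_args()
    print(parse_cluster_spec(args.cluster_spec))
    # e.g. --cluster_spec='worker|host1:1234,worker|host1:1235,ps|host2:2345'
    # -> {'worker': [('host1', 1234), ('host1', 1235)], 'ps': [('host2', 2345)]}
```

With per-endpoint ports like this, two workers on the same host can simply be given different ports, which is exactly the case the current `--pservers`/`--port` flags cannot express.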