diff --git a/docs/1.0/distributed_deprecated.md b/docs/1.0/distributed_deprecated.md
index 8679545d7ab86847f6ac60aa927a6d8964b7cb62..c300205d496354881afec3eea76e2720b8face0a 100644
--- a/docs/1.0/distributed_deprecated.md
+++ b/docs/1.0/distributed_deprecated.md
@@ -335,30 +335,27 @@ torch.distributed.launch is a module that spawns multiple distributed training processes on each training node.
 ### How to use this module:
 
 1. Single-node multi-process distributed training
 
-'''python
->>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
-    --nnodes=2 --node_rank=0 --master_addr="192.168.1.1"
-    --master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
-    and all other arguments of your training script)
-'''
+
+>>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
+    --nnodes=2 --node_rank=0 --master_addr="192.168.1.1"
+    --master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
+    and all other arguments of your training script)
+
 Node 2:
-'''python
->>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
-    --nnodes=2 --node_rank=1 --master_addr="192.168.1.1"
-    --master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
-    and all other arguments of your training script)
-'''
+
+>>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
+    --nnodes=2 --node_rank=1 --master_addr="192.168.1.1"
+    --master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
+    and all other arguments of your training script)
 
 1. To look up what optional arguments this module offers:
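
The launch commands in the diff above assume that `YOUR_TRAINING_SCRIPT.py` accepts the `--local_rank` argument, which `torch.distributed.launch` appends when it spawns each local worker process. A minimal sketch of that argument handling, with no PyTorch dependency so it runs standalone (the sample argv and the commented device-selection step are illustrative, not part of the doc):

```python
import argparse

# torch.distributed.launch invokes the training script once per local
# process and appends --local_rank=<n>, so each process can pick its GPU.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)

# Simulate the argv that the launcher would pass to worker 1; a real
# script would call parse_known_args() with no arguments to read sys.argv.
args, extra = parser.parse_known_args(["--local_rank=1", "--arg1"])

# In an actual training script the next step is typically:
#   torch.cuda.set_device(args.local_rank)
print(args.local_rank)  # rank of this worker on the node
print(extra)            # remaining script-specific arguments
```

Using `parse_known_args` (rather than `parse_args`) lets the script tolerate its own arguments (`--arg1 --arg2 ...`) alongside the launcher-injected one.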