Unverified commit 31d38900, authored by josh11b, committed by GitHub

Merge pull request #5598 from tensorflow/josh11b-patch-1

AllReduceCrossTowerOps -> AllReduceCrossDeviceOps
@@ -27,8 +27,9 @@ def get_distribution_strategy(num_gpus, all_reduce_alg=None):
   Args:
     num_gpus: Number of GPUs to run this model.
     all_reduce_alg: Specify which algorithm to use when performing all-reduce.
-      See tf.contrib.distribute.AllReduceCrossTowerOps for available algorithms.
-      If None, DistributionStrategy will choose based on device topology.
+      See tf.contrib.distribute.AllReduceCrossDeviceOps for available
+      algorithms. If None, DistributionStrategy will choose based on device
+      topology.
   Returns:
     tf.contrib.distribute.DistributionStrategy object.
@@ -41,7 +42,7 @@ def get_distribution_strategy(num_gpus, all_reduce_alg=None):
   if all_reduce_alg:
     return tf.contrib.distribute.MirroredStrategy(
         num_gpus=num_gpus,
-        cross_tower_ops=tf.contrib.distribute.AllReduceCrossTowerOps(
+        cross_tower_ops=tf.contrib.distribute.AllReduceCrossDeviceOps(
             all_reduce_alg, num_packs=2))
   else:
     return tf.contrib.distribute.MirroredStrategy(num_gpus=num_gpus)
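For context, a minimal usage sketch of the helper touched by this change (assumptions: TF 1.x with tf.contrib available, the helper living in official/utils/misc/distribution_utils.py of tensorflow/models, and "nccl" as an example algorithm name accepted by AllReduceCrossDeviceOps):

import tensorflow as tf

from official.utils.misc import distribution_utils

# Build a MirroredStrategy across 2 GPUs, forcing the "nccl" all-reduce
# algorithm; module path and algorithm name are illustrative assumptions.
strategy = distribution_utils.get_distribution_strategy(
    num_gpus=2, all_reduce_alg="nccl")

# The returned strategy is typically handed to an Estimator via RunConfig.
run_config = tf.estimator.RunConfig(train_distribute=strategy)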