Commit 8f17df9a authored by Yuexin Wu, committed by A. Unique TensorFlower

Disable preemption_watcher as it's not supported externally.

PiperOrigin-RevId: 532853592
Parent 0fbcb9fa
@@ -53,7 +53,9 @@ def _run_experiment_with_preemption_recovery(params, model_dir):
           **params.runtime.model_parallelism())
       with distribution_strategy.scope():
         task = task_factory.get_task(params.task, logging_dir=model_dir)
-      preemption_watcher = tf.distribute.experimental.PreemptionWatcher()
+      # pylint: disable=line-too-long
+      preemption_watcher = None  # copybara-replace
+      # pylint: enable=line-too-long
       train_lib.run_experiment(
           distribution_strategy=distribution_strategy,
...
@@ -46,7 +46,9 @@ def _run_experiment_with_preemption_recovery(params, model_dir):
           tpu_address=params.runtime.tpu)
       with distribution_strategy.scope():
         task = task_factory.get_task(params.task, logging_dir=model_dir)
-      preemption_watcher = tf.distribute.experimental.PreemptionWatcher()
+      # pylint: disable=line-too-long
+      preemption_watcher = None  # copybara-replace
+      # pylint: enable=line-too-long
       train_lib.run_experiment(
           distribution_strategy=distribution_strategy,
...
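For context (this note and the sketch below are not part of the commit): tf.distribute.experimental.PreemptionWatcher monitors a running job for worker preemptions so that _run_experiment_with_preemption_recovery can restart training after an interruption. Because that monitoring path is not supported externally, the copybara marker pins preemption_watcher to None in the OSS build, and downstream code must treat the watcher as optional. The helper below is a minimal illustrative sketch of an equivalent guarded construction, assuming only that the symbol may be absent or fail to initialize; it is not the copybara-based mechanism this repository actually uses.

import tensorflow as tf


def maybe_create_preemption_watcher():
  """Best-effort construction of a preemption watcher, or None.

  Illustrative sketch only: the repository swaps the assignment via a
  copybara rule rather than guarding it at runtime.
  """
  watcher_cls = getattr(tf.distribute.experimental, "PreemptionWatcher", None)
  if watcher_cls is None:
    # Older TF releases do not expose the symbol at all.
    return None
  try:
    # Starts background monitoring for worker preemption signals on
    # deployments where the feature is supported.
    return watcher_cls()
  except Exception:  # pylint: disable=broad-except
    # Unsupported environment: behave like the external build (None).
    return None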