From dfe3ee7445c4634c700a32072215e713cc91a560 Mon Sep 17 00:00:00 2001
From: wukesong
Date: Fri, 31 Jul 2020 11:41:37 +0800
Subject: [PATCH] modify lossmonitor location

---
 tutorials/source_en/quick_start/quick_start.md    | 6 +++---
 tutorials/source_zh_cn/quick_start/quick_start.md | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/tutorials/source_en/quick_start/quick_start.md b/tutorials/source_en/quick_start/quick_start.md
index 7599bd4b..700c262d 100644
--- a/tutorials/source_en/quick_start/quick_start.md
+++ b/tutorials/source_en/quick_start/quick_start.md
@@ -316,8 +316,8 @@ if __name__ == "__main__":
 
 ### Saving the Configured Model
 
-MindSpore provides the callback mechanism to execute customized logic during training. `ModelCheckpoint` and `LossMonitor` provided by the framework are used in this example.
-`ModelCheckpoint` can save network models and parameters for subsequent fine-tuning. `LossMonitor` can monitor the changes of the `loss` value during training.
+MindSpore provides the callback mechanism to execute customized logic during training. `ModelCheckpoint` provided by the framework is used in this example.
+`ModelCheckpoint` can save network models and parameters for subsequent fine-tuning.
 
 ```python
 from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
@@ -333,7 +333,7 @@ if __name__ == "__main__":
 
 ### Configuring the Network Training
 
-Use the `model.train` API provided by MindSpore to easily train the network.
+Use the `model.train` API provided by MindSpore to easily train the network. `LossMonitor` can monitor the changes of the `loss` value during training.
 In this example, set `epoch_size` to 1 to train the dataset for five iterations.
 
 ```python

diff --git a/tutorials/source_zh_cn/quick_start/quick_start.md b/tutorials/source_zh_cn/quick_start/quick_start.md
index 365c41ea..1e188772 100644
--- a/tutorials/source_zh_cn/quick_start/quick_start.md
+++ b/tutorials/source_zh_cn/quick_start/quick_start.md
@@ -317,8 +317,8 @@ if __name__ == "__main__":
 
 ### 配置模型保存
 
-MindSpore提供了callback机制,可以在训练过程中执行自定义逻辑,这里使用框架提供的`ModelCheckpoint`和`LossMonitor`为例。
-`ModelCheckpoint`可以保存网络模型和参数,以便进行后续的fine-tuning(微调)操作,`LossMonitor`可以监控训练过程中`loss`值的变化。
+MindSpore提供了callback机制,可以在训练过程中执行自定义逻辑,这里使用框架提供的`ModelCheckpoint`为例。
+`ModelCheckpoint`可以保存网络模型和参数,以便进行后续的fine-tuning(微调)操作。
 
 ```python
 from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
@@ -334,7 +334,7 @@ if __name__ == "__main__":
 
 ### 配置训练网络
 
-通过MindSpore提供的`model.train`接口可以方便地进行网络的训练。
+通过MindSpore提供的`model.train`接口可以方便地进行网络的训练。`LossMonitor`可以监控训练过程中`loss`值的变化。
 这里把`epoch_size`设置为1,对数据集进行1个迭代的训练。
 
 ```python
-- 
GitLab
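
The callback mechanism this patch documents (`ModelCheckpoint` saving parameters, `LossMonitor` observing the loss, both handed to `model.train`) can be sketched in plain Python. All class and function names below are hypothetical stand-ins for illustration, not MindSpore's actual API:

```python
# Minimal sketch of a training-callback mechanism, assuming a training loop
# that notifies registered callbacks at the end of every step. Hypothetical
# names; not MindSpore code.

class LossMonitorSketch:
    """Records the loss reported at the end of each training step."""
    def __init__(self):
        self.losses = []

    def step_end(self, ctx):
        self.losses.append(ctx["loss"])


class CheckpointSketch:
    """Snapshots the model parameters every `interval` steps."""
    def __init__(self, interval=2):
        self.interval = interval
        self.snapshots = []

    def step_end(self, ctx):
        if ctx["step"] % self.interval == 0:
            self.snapshots.append({"step": ctx["step"], "params": ctx["params"]})


def train(num_steps, callbacks):
    """Toy training loop: fakes a loss and an update, then fires callbacks."""
    params = {"w": 0.0}
    for step in range(1, num_steps + 1):
        loss = 1.0 / step          # stand-in for a real loss value
        params["w"] += 0.1         # stand-in for an optimizer update
        ctx = {"step": step, "loss": loss, "params": dict(params)}
        for cb in callbacks:
            cb.step_end(ctx)
    return params


monitor = LossMonitorSketch()
ckpt = CheckpointSketch(interval=2)
train(4, [monitor, ckpt])
```

After the run, `monitor.losses` holds one entry per step and `ckpt.snapshots` holds a parameter copy at steps 2 and 4, mirroring how the tutorial passes both callbacks into one training call while documenting each where it is used.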