diff --git a/tutorials/source_en/quick_start/quick_start.md b/tutorials/source_en/quick_start/quick_start.md
index 7599bd4b9285f43db1458c18056bf1ee524e2318..700c262d5650d4c8d9be78ac8c42a6af1f95f283 100644
--- a/tutorials/source_en/quick_start/quick_start.md
+++ b/tutorials/source_en/quick_start/quick_start.md
@@ -316,8 +316,8 @@ if __name__ == "__main__":
 
 ### Saving the Configured Model
 
-MindSpore provides the callback mechanism to execute customized logic during training. `ModelCheckpoint` and `LossMonitor` provided by the framework are used in this example.
-`ModelCheckpoint` can save network models and parameters for subsequent fine-tuning. `LossMonitor` can monitor the changes of the `loss` value during training.
+MindSpore provides the callback mechanism to execute customized logic during training. `ModelCheckpoint` provided by the framework is used in this example.
+`ModelCheckpoint` can save network models and parameters for subsequent fine-tuning.
 
 ```python
 from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
@@ -333,7 +333,7 @@ if __name__ == "__main__":
 
 ### Configuring the Network Training
 
-Use the `model.train` API provided by MindSpore to easily train the network.
+Use the `model.train` API provided by MindSpore to easily train the network. `LossMonitor` can monitor the changes of the `loss` value during training.
 In this example, set `epoch_size` to 1 to train the dataset for five iterations.
 
 ```python
diff --git a/tutorials/source_zh_cn/quick_start/quick_start.md b/tutorials/source_zh_cn/quick_start/quick_start.md
index 365c41ea814770a800be43064eca015652ed6b06..1e1887722d0a92fb9d749409ee412bf3c9907b70 100644
--- a/tutorials/source_zh_cn/quick_start/quick_start.md
+++ b/tutorials/source_zh_cn/quick_start/quick_start.md
@@ -317,8 +317,8 @@ if __name__ == "__main__":
 
 ### 配置模型保存
 
-MindSpore提供了callback机制,可以在训练过程中执行自定义逻辑,这里使用框架提供的`ModelCheckpoint`和`LossMonitor`为例。
-`ModelCheckpoint`可以保存网络模型和参数,以便进行后续的fine-tuning(微调)操作,`LossMonitor`可以监控训练过程中`loss`值的变化。
+MindSpore提供了callback机制,可以在训练过程中执行自定义逻辑,这里使用框架提供的`ModelCheckpoint`为例。
+`ModelCheckpoint`可以保存网络模型和参数,以便进行后续的fine-tuning(微调)操作。
 
 ```python
 from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
@@ -334,7 +334,7 @@ if __name__ == "__main__":
 
 ### 配置训练网络
 
-通过MindSpore提供的`model.train`接口可以方便地进行网络的训练。
+通过MindSpore提供的`model.train`接口可以方便地进行网络的训练。`LossMonitor`可以监控训练过程中`loss`值的变化。
 这里把`epoch_size`设置为1,对数据集进行1个迭代的训练。
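
Neither hunk reaches the `model.train` call where the two callbacks are actually registered, so the relationship the patch documents (checkpointing configured in one step, loss monitoring in the training step) is easiest to see in code. The sketch below is illustrative only and not part of the patch; it assumes the `model`, `ds_train`, and `epoch_size` objects defined in the earlier steps of `quick_start.md`.

```python
# Illustrative sketch only (not part of the diff above): wiring ModelCheckpoint
# and LossMonitor into model.train. `model`, `ds_train`, and `epoch_size` are
# assumed to come from the earlier steps of quick_start.md.
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor

# Save a checkpoint every 1875 steps and keep at most 10 checkpoint files.
config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10)
ckpoint_cb = ModelCheckpoint(prefix="checkpoint_lenet", config=config_ck)

# LossMonitor prints the loss value as training progresses; both callbacks are
# handed to model.train through the callbacks argument.
model.train(epoch_size, ds_train,
            callbacks=[ckpoint_cb, LossMonitor()],
            dataset_sink_mode=False)
```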