Commit 17590a7c authored by mindspore-ci-bot, committed by Gitee

!3 Correct spelling errors

Merge pull request !3 from leiyuning/fix_spell
......@@ -18,6 +18,7 @@
| EulerOS | Euler operating system, which is developed by Huawei based on the standard Linux kernel. |
| FC Layer | Fully connected layer, which acts as a classifier in the entire convolutional neural network. |
| FE | Fusion Engine, which connects to GE and TBE operators and has the capabilities of loading and managing the operator information library and managing fusion rules. |
| Fine-tuning | A process that takes a network model already trained for one task and adapts it to a second, similar task. |
| FP16 | 16-bit floating point, which is a half-precision floating point arithmetic format, consuming less memory. |
| FP32 | 32-bit floating point, which is a single-precision floating point arithmetic format. |
| GE | Graph Engine, MindSpore computational graph execution engine, which is responsible for optimizing hardware usage (such as operator fusion and memory reuse) based on the front-end computational graph and starting tasks on the device side. |
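The memory difference between the FP16 and FP32 formats in the glossary above can be checked directly. The sketch below uses NumPy (an assumption of this illustration, not part of the original document) to compare the byte footprint of the two formats.

```python
import numpy as np

# A 1024-element array in half precision (FP16) uses 2 bytes per value,
# while single precision (FP32) uses 4 bytes, doubling the memory cost.
fp16 = np.ones(1024, dtype=np.float16)
fp32 = np.ones(1024, dtype=np.float32)
print(fp16.nbytes)  # 2048
print(fp32.nbytes)  # 4096
```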
......
......@@ -18,6 +18,7 @@
| EulerOS | Euler operating system, developed by Huawei based on the standard Linux kernel. |
| FC Layer | Fully Connected Layer, which acts as the classifier in the entire convolutional neural network. |
| FE | Fusion Engine, which connects to GE and TBE operators and provides capabilities such as loading and managing the operator information library and managing fusion rules. |
| Fine-tuning | Training a network model for a second, similar task based on a model already trained for one task. |
| FP16 | 16-bit floating point, a half-precision floating-point arithmetic format that consumes less memory. |
| FP32 | 32-bit floating point, a single-precision floating-point arithmetic format. |
| GE | Graph Engine, MindSpore's computational graph execution engine, which is responsible for hardware-related optimizations (operator fusion, memory reuse, and so on) based on the front-end computational graph and for launching tasks on the device side. |
......
......@@ -317,7 +317,7 @@ if __name__ == "__main__":
### Configuring Model Saving
MindSpore provides a callback mechanism for executing custom logic during training; the framework-provided `ModelCheckpoint` and `LossMonitor` are used here as examples.
`ModelCheckpoint` can save the network model and parameters for subsequent fune-tune (fine-tuning) operations, and `LossMonitor` can monitor changes in the `loss` value during training.
`ModelCheckpoint` can save the network model and parameters for subsequent fine-tuning, and `LossMonitor` can monitor changes in the `loss` value during training.
```python
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
......
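The callback mechanism described above can be illustrated with a minimal, framework-free sketch: the training loop invokes each callback after every step. The `train` loop, `SaveCallback`, and `PrintLossCallback` names below are hypothetical stand-ins for MindSpore's `ModelCheckpoint` and `LossMonitor`, not actual MindSpore APIs.

```python
class SaveCallback:
    """Records the step at every save interval (stands in for ModelCheckpoint)."""
    def __init__(self, interval):
        self.interval = interval
        self.saved_steps = []

    def on_step_end(self, step, loss):
        if step % self.interval == 0:
            self.saved_steps.append(step)


class PrintLossCallback:
    """Tracks the loss after each step (stands in for LossMonitor)."""
    def __init__(self):
        self.losses = []

    def on_step_end(self, step, loss):
        self.losses.append(loss)


def train(num_steps, callbacks):
    """Toy training loop: computes a dummy loss, then notifies each callback."""
    for step in range(1, num_steps + 1):
        loss = 1.0 / step  # dummy decreasing loss
        for cb in callbacks:
            cb.on_step_end(step, loss)


saver, monitor = SaveCallback(interval=5), PrintLossCallback()
train(10, [saver, monitor])
print(saver.saved_steps)    # [5, 10]
print(len(monitor.losses))  # 10
```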
......@@ -23,7 +23,7 @@
- During training, validate accuracy in real time and save the model parameters that achieve the highest accuracy for later prediction.
- Retraining scenarios
- For long-running training tasks, save CheckPoint files during training so that an abnormal exit does not force training to restart from the initial state.
- Fine Tune: train a model and save its parameters, then perform Fine Tune operations for different tasks.
- Fine-tuning scenario: train a model and save its parameters, then use that model as the basis for training on a second, similar task.
MindSpore's CheckPoint file is a binary file that stores the values of all training parameters. It uses Google's Protocol Buffers mechanism, which is independent of development language and platform and offers good extensibility.
The CheckPoint protocol format is defined in `mindspore/ccsrc/utils/checkpoint.proto`.
......@@ -117,7 +117,7 @@ acc = model.eval(dataset_eval)
### For Retraining Scenarios
For retraining after a task interruption and for Fine Tune scenarios, the network parameters and optimizer parameters can be loaded into the model.
For retraining after a task interruption and for fine-tuning scenarios, the network parameters and optimizer parameters can be loaded into the model.
Sample code:
```python
......
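The save-then-reload flow described above can be sketched schematically. MindSpore itself serializes parameters into a Protocol Buffers binary via its checkpoint APIs; the plain dict and `pickle` used below are only an illustration of the round trip, not MindSpore's actual mechanism.

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for a checkpoint: a mapping from parameter
# names to values (MindSpore stores these in a Protocol Buffers binary).
params = {"conv1.weight": [0.1, 0.2], "fc.bias": [0.0]}

ckpt_path = os.path.join(tempfile.mkdtemp(), "model.ckpt")

# "Save checkpoint": serialize all parameter values to a binary file.
with open(ckpt_path, "wb") as f:
    pickle.dump(params, f)

# "Load checkpoint": restore the parameters for retraining/fine-tuning.
with open(ckpt_path, "rb") as f:
    restored = pickle.load(f)

print(restored == params)  # True
```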