Commit daf73f94 authored by mindspore-ci-bot, committed by Gitee

!508 [Host+device] Update the tutorials for some typos

Merge pull request !508 from Xiaoda/master
...@@ -69,7 +69,7 @@ Use the script `script/run_auto_parallel_train.sh`. Run the command `bash run_au
where the first `1` is the number of accelerators, the second `1` is the number of epochs, `DATASET` is the path of dataset,
and `RANK_TABLE_FILE` is the path of the above `rank_table_1p_0.json` file.
The running log is in the directory of `device_0`, where `loss.log` contains every loss value of every step in the epoch. Here is an example:
```
epoch: 1 step: 1, wide_loss is 0.6873926, deep_loss is 0.8878349
......
```
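As an aside, the `loss.log` lines shown above follow a fixed format, so they are easy to post-process. Below is a minimal sketch of a parser for that format; the helper `parse_loss_line` is hypothetical and not part of the tutorial or its scripts:

```python
import re

# Matches log lines of the form shown above, e.g.
# "epoch: 1 step: 1, wide_loss is 0.6873926, deep_loss is 0.8878349"
LINE_RE = re.compile(
    r"epoch:\s*(\d+)\s+step:\s*(\d+),\s*wide_loss is ([\d.]+),\s*deep_loss is ([\d.]+)"
)

def parse_loss_line(line):
    """Return (epoch, step, wide_loss, deep_loss), or None if the line doesn't match."""
    m = LINE_RE.match(line)
    if m is None:
        return None
    epoch, step, wide, deep = m.groups()
    return int(epoch), int(step), float(wide), float(deep)

print(parse_loss_line("epoch: 1 step: 1, wide_loss is 0.6873926, deep_loss is 0.8878349"))
```

This can be applied line by line over `device_0/loss.log` to, for example, plot the two loss curves across steps.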
...@@ -65,7 +65,7 @@
Use the training script `script/run_auto_parallel_train.sh`. Run the command `bash run_auto_parallel_train.sh 1 1 DATASET RANK_TABLE_FILE`,
where the first `1` is the number of devices used, the second `1` is the number of training epochs, `DATASET` is the path of the dataset, and `RANK_TABLE_FILE` is the path of the above `rank_table_1p_0.json` file.
The running log is saved in the `device_0` directory, where `loss.log` contains the loss values within an epoch, similar to the following:
```
epoch: 1 step: 1, wide_loss is 0.6873926, deep_loss is 0.8878349
......
```