Commit be13688e in magicwindyyd / mindspore (forked from MindSpore / mindspore)

Authored on Aug 28, 2020 by mindspore-ci-bot; committed via Gitee on Aug 28, 2020

!5431 update lstm readme

Merge pull request !5431 from caojian05/ms_lstm_readme_update

Parents: 6fd3f2e9, c465f2bf
Showing 3 changed files with 153 additions and 40 deletions (+153, -40)
model_zoo/official/nlp/lstm/README.md                 +81  -40
model_zoo/official/nlp/lstm/script/run_eval_cpu.sh    +37   -0
model_zoo/official/nlp/lstm/script/run_train_cpu.sh   +35   -0
model_zoo/official/nlp/lstm/README.md @ be13688e
@@ -58,6 +58,16 @@ LSTM contains embedding, encoder and decoder modules. Encoder module consists of

    bash run_eval_gpu.sh 0 ./aclimdb ./glove_dir lstm-20_390.ckpt
    ```

- running on CPU

    ```bash
    # run training example
    bash run_train_cpu.sh ./aclimdb ./glove_dir

    # run evaluation example
    bash run_eval_cpu.sh ./aclimdb ./glove_dir lstm-20_390.ckpt
    ```
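The two new CPU scripts take their inputs as positional arguments and forward them to train.py / eval.py (see the scripts added later in this commit). A quick orientation sketch; the /data/... paths below are placeholders, not paths from this repository:

```bash
# $1 -> --aclimdb_path, $2 -> --glove_path, $3 (evaluation only) -> --ckpt_path
# point these at wherever the IMDB dataset, GloVe files and checkpoint live on your machine
bash run_train_cpu.sh /data/aclimdb /data/glove
bash run_eval_cpu.sh /data/aclimdb /data/glove ./lstm-20_390.ckpt
```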
# [Script Description](#contents)

@@ -69,14 +79,16 @@ LSTM contains embedding, encoder and decoder modules. Encoder module consists of

    ```
    ├── README.md               # descriptions about LSTM
    ├── script
    │   ├── run_eval_gpu.sh     # shell script for evaluation on GPU
    │   ├── run_eval_cpu.sh     # shell script for evaluation on CPU
    │   ├── run_train_gpu.sh    # shell script for training on GPU
    │   └── run_train_cpu.sh    # shell script for training on CPU
    ├── src
    │   ├── config.py           # parameter configuration
    │   ├── dataset.py          # dataset preprocess
    │   ├── imdb.py             # imdb dataset read script
    │   └── lstm.py             # Sentiment model
    ├── eval.py                 # evaluation script on both GPU and CPU
    └── train.py                # training script on both GPU and CPU
    ```
@@ -154,60 +166,89 @@ config.py:

- Set options in `config.py`, including learning rate and network hyperparameters.

- running on GPU

    Run `sh run_train_gpu.sh` for training.

    ```bash
    bash run_train_gpu.sh 0 ./aclimdb ./glove_dir
    ```

    The above shell script will run distributed training in the background. You will get the loss value as follows:

    ```shell
    # grep "loss is " log.txt
    epoch: 1 step: 390, loss is 0.6003723
    epoch: 2 step: 390, loss is 0.35312173
    ...
    ```

- running on CPU

    Run `sh run_train_cpu.sh` for training.

    ```bash
    bash run_train_cpu.sh ./aclimdb ./glove_dir
    ```

    The above shell script will train in the background. You will get the loss value as follows:

    ```shell
    # grep "loss is " log.txt
    epoch: 1 step: 390, loss is 0.6003723
    epoch: 2 step: 390, loss is 0.35312173
    ...
    ```
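Because run_train_cpu.sh launches train.py in the background and redirects all output to log.txt, the shell prompt returns immediately. A minimal sketch for keeping an eye on the job; the exact process listing depends on your environment, so treat this as an illustration rather than part of the official scripts:

```bash
# check that the background training process is still alive
ps -ef | grep train.py

# follow the training log as it is written
tail -f log.txt
```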
## [Evaluation Process](#contents)

- evaluation on GPU

    Run `bash run_eval_gpu.sh` for evaluation.

    ```bash
    bash run_eval_gpu.sh 0 ./aclimdb ./glove_dir lstm-20_390.ckpt
    ```

- evaluation on CPU

    Run `bash run_eval_cpu.sh` for evaluation.

    ```bash
    bash run_eval_cpu.sh ./aclimdb ./glove_dir lstm-20_390.ckpt
    ```
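As with training, the CPU evaluation runs eval.py in the background with stdout and stderr redirected, so nothing appears in the terminal. The result has to be read from log.txt once the job finishes; the exact wording of the accuracy line is not shown in this diff, so the sketch below simply prints the whole file:

```bash
bash run_eval_cpu.sh ./aclimdb ./glove_dir lstm-20_390.ckpt

# once eval.py has exited, read the result from the log
cat log.txt
```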
# [Model Description](#contents)

## [Performance](#contents)

### Training Performance

| Parameters               | LSTM (GPU)                  | LSTM (CPU)                  |
| ------------------------ | --------------------------- | --------------------------- |
| Resource                 | Tesla V100-SMX2-16GB        | Ubuntu X86-i7-8565U-16GB    |
| uploaded Date            | 08/06/2020 (month/day/year) | 08/06/2020 (month/day/year) |
| MindSpore Version        | 0.6.0-beta                  | 0.6.0-beta                  |
| Dataset                  | aclimdb_v1                  | aclimdb_v1                  |
| Training Parameters      | epoch=20, batch_size=64     | epoch=20, batch_size=64     |
| Optimizer                | Momentum                    | Momentum                    |
| Loss Function            | Softmax Cross Entropy       | Softmax Cross Entropy       |
| Speed                    | 1022 (1pcs)                 | 20                          |
| Loss                     | 0.12                        | 0.12                        |
| Params (M)               | 6.45                        | 6.45                        |
| Checkpoint for inference | 292.9M (.ckpt file)         | 292.9M (.ckpt file)         |
| Scripts                  | [lstm script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/nlp/lstm) | [lstm script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/nlp/lstm) |
### Evaluation Performance

| Parameters        | LSTM (GPU)                  | LSTM (CPU)                  |
| ----------------- | --------------------------- | --------------------------- |
| Resource          | Tesla V100-SMX2-16GB        | Ubuntu X86-i7-8565U-16GB    |
| uploaded Date     | 08/06/2020 (month/day/year) | 08/06/2020 (month/day/year) |
| MindSpore Version | 0.6.0-beta                  | 0.6.0-beta                  |
| Dataset           | aclimdb_v1                  | aclimdb_v1                  |
| batch_size        | 64                          | 64                          |
| Accuracy          | 84%                         | 83%                         |
# [Description of Random Situation](#contents)
model_zoo/official/nlp/lstm/script/run_eval_cpu.sh (new file, 0 → 100644) @ be13688e
#!/bin/bash
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

echo "=============================================================================================================="
echo "Please run the script as: "
echo "bash run_eval_cpu.sh ACLIMDB_DIR GLOVE_DIR CKPT_FILE"
echo "for example: bash run_eval_cpu.sh ./aclimdb ./glove_dir lstm-20_390.ckpt"
echo "=============================================================================================================="

ACLIMDB_DIR=$1
GLOVE_DIR=$2
CKPT_FILE=$3

mkdir -p ms_log
CUR_DIR=`pwd`
export GLOG_log_dir=${CUR_DIR}/ms_log
export GLOG_logtostderr=0

python eval.py \
    --device_target="CPU" \
    --aclimdb_path=$ACLIMDB_DIR \
    --glove_path=$GLOVE_DIR \
    --preprocess=false \
    --preprocess_path=./preprocess \
    --ckpt_path=$CKPT_FILE > log.txt 2>&1 &
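One detail worth calling out: eval.py is invoked here with --preprocess=false and --preprocess_path=./preprocess, while the training script that follows uses --preprocess=true. A plausible reading (an assumption on my part, not stated anywhere in this diff) is that evaluation reuses the preprocessed dataset produced during training, which would make the natural ordering:

```bash
# train first so that ./preprocess (and the checkpoint) exist, then evaluate
bash run_train_cpu.sh ./aclimdb ./glove_dir
bash run_eval_cpu.sh ./aclimdb ./glove_dir lstm-20_390.ckpt
```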
model_zoo/official/nlp/lstm/script/run_train_cpu.sh (new file, 0 → 100644) @ be13688e
#!/bin/bash
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

echo "=============================================================================================================="
echo "Please run the script as: "
echo "bash run_train_cpu.sh ACLIMDB_DIR GLOVE_DIR"
echo "for example: bash run_train_cpu.sh ./aclimdb ./glove_dir"
echo "=============================================================================================================="

ACLIMDB_DIR=$1
GLOVE_DIR=$2

mkdir -p ms_log
CUR_DIR=`pwd`
export GLOG_log_dir=${CUR_DIR}/ms_log
export GLOG_logtostderr=0

python train.py \
    --device_target="CPU" \
    --aclimdb_path=$ACLIMDB_DIR \
    --glove_path=$GLOVE_DIR \
    --preprocess=true \
    --preprocess_path=./preprocess > log.txt 2>&1 &
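For reference, a usage sketch following the README's CPU training example. The script calls python train.py and creates log.txt and ms_log in the current working directory, so the sketch assumes it is launched from the lstm directory that contains train.py; this working-directory assumption is mine, as the diff does not state it, and ./aclimdb and ./glove_dir are the README's example paths rather than requirements:

```bash
# assumption: run from the lstm directory so that "python train.py" resolves
bash script/run_train_cpu.sh ./aclimdb ./glove_dir

# the script writes log.txt plus a ./ms_log directory for MindSpore GLOG output
ls ms_log
tail -n 5 log.txt
```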