PaddlePaddle / book

Commit 7fbabdd6
Authored Jun 07, 2018 by daming-lu
Parent: 64dda6c5

make changes based on Jupyter user experience
Showing 2 changed files with 14 additions and 8 deletions (+14 -8):

- 04.word2vec/README.md  (+13 -7)
- 04.word2vec/train.py   (+1 -1)
04.word2vec/README.md
@@ -218,6 +218,10 @@ Our program starts with importing necessary packages:
```python
import paddle
import paddle.fluid as fluid
import numpy
from functools import partial
import math
import os
import sys
```
- Configure parameters and build word dictionary.
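The configuration this bullet refers to is collapsed out of the diff. As a rough sketch of what it typically looks like with the `paddle.dataset.imikolov` API (the constant values below are illustrative assumptions, not necessarily the ones used in this chapter):

```python
# Illustrative hyper-parameters; the exact values live in the collapsed
# part of the README and may differ.
EMBED_SIZE = 32    # dimension of each word embedding
HIDDEN_SIZE = 256  # size of the hidden projection layer
N = 5              # n-gram window: 4 context words predict the 5th
BATCH_SIZE = 32

# Build the word dictionary from the PTB (imikolov) dataset.
word_dict = paddle.dataset.imikolov.build_dict()
dict_size = len(word_dict)
```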
@@ -300,6 +304,12 @@ def train_program(is_sparse):
`event_handler` can be passed into `trainer.train` so that we can do some tasks after each step or epoch. These tasks include recording current metrics or terminating the current training process.
```python
def optimizer_func():
    return fluid.optimizer.AdagradOptimizer(
        learning_rate=3e-3,
        regularization=fluid.regularizer.L2DecayRegularizer(8e-4))


def train(use_cuda, train_program, params_dirname):
    train_reader = paddle.batch(
        paddle.dataset.imikolov.train(word_dict, N), BATCH_SIZE)
```
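The `event_handler` itself is not part of this hunk. A minimal sketch of such a handler, assuming the `fluid.EndStepEvent` class exposed by the Fluid `Trainer` API of that period (the real handler in this chapter also computes `avg_cost` and decides when to stop, as the next hunk shows):

```python
def event_handler(event):
    # Called by trainer.train() after every step (mini-batch) and epoch.
    if isinstance(event, fluid.EndStepEvent):
        if event.step % 10 == 0:
            # Python 2 print statement, matching the style of this README.
            print "Step %d" % event.step
```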
@@ -317,10 +327,10 @@ def train(use_cuda, train_program, params_dirname):
 # We output cost every 10 steps.
 if event.step % 10 == 0:
-    print "Step %d: Average Cost %f" % (event.step, event.cost)
+    print "Step %d: Average Cost %f" % (event.step, avg_cost)
 # If average cost is lower than 5.0, we consider the model good enough to stop.
-if avg_cost < 5.5:
+if avg_cost < 5.8:
     trainer.save_params(params_dirname)
     trainer.stop()
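This hunk prints `avg_cost`, a value computed earlier in the handler, instead of the raw `event.cost` of the current step. A sketch of how such a value is typically obtained with the `Trainer.test` call, under the assumption that a `test_reader` and the chapter's five n-gram feed names are in scope (both are assumptions here, not shown in this diff):

```python
# Hypothetical sketch: run a test pass inside the event handler and take
# the first fetched metric as the average cost. test_reader and the feed
# names are assumed, not taken from this diff.
outs = trainer.test(
    reader=test_reader,
    feed_order=['firstw', 'secondw', 'thirdw', 'fourthw', 'nextw'])
avg_cost = outs[0]
```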
@@ -333,10 +343,7 @@ def train(use_cuda, train_program, params_dirname):
 # such as AdaGrad with a decay rate. The normal SGD converges
 # very slowly.
 # optimizer=fluid.optimizer.SGD(learning_rate=0.001),
-optimizer=fluid.optimizer.AdagradOptimizer(
-    learning_rate=3e-3,
-    regularization=fluid.regularizer.L2DecayRegularizer(8e-4)),
+optimizer_func=optimizer_func,
 place=place)
 trainer.train(
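For context, this hunk edits the argument list of the `fluid.Trainer` constructor: the inline optimizer instance is replaced by the `optimizer_func` defined earlier. A sketch of the resulting call, assuming the surrounding (collapsed) lines follow the usual pattern of this chapter:

```python
# Sketch of the Trainer construction after this change; the exact
# surrounding lines are collapsed in the diff.
trainer = fluid.Trainer(
    train_func=train_program,
    optimizer_func=optimizer_func,  # pass a function, not an optimizer instance
    place=place)
```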
@@ -414,7 +421,6 @@ When we spent 30 mins in training, the output is like below, which means the nex
The main entry point of the program is fairly simple:
```python
def main(use_cuda, is_sparse):
    if use_cuda and not fluid.core.is_compiled_with_cuda():
        return
```
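A typical way the chapter invokes this entry point, as a hedged sketch (the argument values here are illustrative, not taken from this diff):

```python
if __name__ == '__main__':
    # Illustrative invocation; set use_cuda to True to train on GPU.
    main(use_cuda=False, is_sparse=True)
```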
04.word2vec/train.py
@@ -107,7 +107,7 @@ def train(use_cuda, train_program, params_dirname):
 if event.step % 10 == 0:
     print "Step %d: Average Cost %f" % (event.step, avg_cost)
-if avg_cost < 5.5:
+if avg_cost < 5.8:
     trainer.save_params(params_dirname)
     trainer.stop()