Commit 92ab6148
Authored Feb 13, 2017 by Yi Wang
Parent: 3529c6c3

Add Event Handler

Showing 1 changed file with 30 additions and 3 deletions:

doc/design/api.md (+30, -3)
@@ -163,7 +163,7 @@ There are some open questions here:
    feed a topology with more data layers?**

-### Training
+## Training

 The recommended way to train a model is to call `paddle.train`,
 which simply calls `paddle.trainer.Default`, a global variable of
@@ -171,15 +171,42 @@ type `paddle.trainer.SGD`. Equivalently, we can do

 ```python
 opt = paddle.trainer.SGD(..., paddle.updater.Adam(...))
-opt.train(model, reader=read, ...)
+opt.train(topology, parameters, reader=read, ...)
 ```
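
To make the delegation described above concrete, here is a minimal, self-contained sketch of a module-level default trainer and a `train` function that forwards to it. All names and bodies below are illustrative stand-ins, not the actual Paddle implementation.

```python
# Sketch of the delegation described above: `train` simply forwards to
# a module-level default trainer.  Illustrative stand-ins only.

class SGD(object):
    def __init__(self, updater=None):
        self.updater = updater  # the updater is a data member; see below

    def train(self, topology, parameters, reader=None, event_handler=None):
        # a real trainer would run the forward/backward/update loop here
        pass

Default = SGD()  # plays the role of the global paddle.trainer.Default

def train(topology, parameters, **kwargs):
    # plays the role of paddle.train: just call the default trainer
    return Default.train(topology, parameters, **kwargs)
```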
+
+### Updater
+
+Please be aware that a trainer requires an updater as its data
+member. This is to make it easier to customize trainers, as
+discussed [here](https://github.com/PaddlePaddle/Paddle/issues/1319).
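
A small sketch can illustrate that data-member relationship: because the trainer stores whatever updater it was given, customizing the optimization rule means passing in a different updater object rather than subclassing the trainer. `PlainSGD` and `Trainer` below are hypothetical names, not part of the proposed API.

```python
# Sketch of the trainer/updater relationship: the updater is stored as
# a data member, so swapping updater objects customizes the trainer.
# Names and the update rule are simplified illustrations.

class PlainSGD(object):
    def __init__(self, learning_rate=0.01):
        self.learning_rate = learning_rate

    def update(self, parameters, gradients):
        for name, grad in gradients.items():
            parameters[name] -= self.learning_rate * grad

class Trainer(object):
    def __init__(self, updater):
        self.updater = updater  # the customization point

    def step(self, parameters, gradients):
        self.updater.update(parameters, gradients)

# The same Trainer works with any object implementing .update():
trainer = Trainer(updater=PlainSGD(learning_rate=0.1))
params = {"w": 1.0}
trainer.step(params, {"w": 0.5})  # params["w"] is now 0.95
```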
+
+### Event Handler
+
+`paddle.train` and `paddle.trainer.XXX.train` take an optional
+parameter `event_handler`, which should be either `None` or a function
+that handles some events:
+
+1. BeginTraining
+1. EndTraining
+1. BeginMinibatch
+1. EndMinibatch
+1. BeginPass
+1. EndPass
+
+where EndPass is sent if and only if the reader yields
+`end_pass=True`.
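
One way to picture that contract is a reader generator that yields an `end_pass` flag alongside each minibatch; the exact shape of what a reader yields is an assumption here, not something the diff specifies.

```python
# Hypothetical reader illustrating the EndPass rule above: the last
# batch of each pass carries end_pass=True, and the trainer emits
# EndPass exactly when it sees that flag.  The dict shape is assumed.

def read(num_passes=2, batches_per_pass=3):
    for _ in range(num_passes):
        for i in range(batches_per_pass):
            yield {"data": i, "end_pass": i == batches_per_pass - 1}
```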
+
+An example follows:
+
+```python
+def event_handler(event):
+    if isinstance(event, paddle.event.EndMinibatch):
+        print paddle.test(...)
+
+paddle.train(topology, parameters, reader, event_handler)
+```
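
Putting the six events together with the `end_pass` rule, a trainer's dispatch loop might look like the following sketch. The marker classes stand in for `paddle.event.*`, and the loop structure is an illustration rather than Paddle source.

```python
# Sketch of an event-dispatch loop consistent with the event list
# above.  Marker classes stand in for paddle.event.*.

class BeginTraining(object): pass
class EndTraining(object): pass
class BeginPass(object): pass
class EndPass(object): pass
class BeginMinibatch(object): pass
class EndMinibatch(object): pass

def run(reader, event_handler):
    event_handler(BeginTraining())
    in_pass = False
    for batch in reader:
        if not in_pass:
            event_handler(BeginPass())
            in_pass = True
        event_handler(BeginMinibatch())
        # forward/backward/update would happen here
        event_handler(EndMinibatch())
        if batch.get("end_pass"):
            event_handler(EndPass())  # sent iff the reader yields end_pass=True
            in_pass = False
    event_handler(EndTraining())
```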
+
-#### Distributed Training
+### Distributed Training

 If users want to do distributed training on a cluster, s/he should
 call `paddle.dist_train` and provide access tokens to the cluster as
 ...