BaiXuePrincess / Paddle · Commit ea60e644
Forked from PaddlePaddle / Paddle (in sync with the fork source)
Commit ea60e644 (unverified)
Authored Jul 24, 2020 by mapingshuo; committed via GitHub on Jul 24, 2020

correct the LookaheadOptimizer programDesc, test=develop (#25688)

Parent: b5f8784c
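The ops this commit relocates implement the Lookahead update rule: every `k` fast steps, the slow weights are blended as `slow = alpha * fast + (1 - alpha) * slow`, and the fast weights restart from that blend. A plain-Python sketch of that rule (a hypothetical stand-alone re-implementation for illustration, not Paddle code — `lookahead_step` is an invented helper):

```python
# Hypothetical plain-Python sketch of the Lookahead rule that the fluid
# ops below (elementwise_mod + Switch + elementwise_add/mul/sub) encode.
def lookahead_step(step, k, alpha, fast, slow):
    """Advance the step counter; blend fast and slow weights every k steps."""
    step += 1
    if step % k == 0:
        # tmp = alpha * fast + (1 - alpha) * slow, elementwise
        blended = [alpha * f + (1.0 - alpha) * s for f, s in zip(fast, slow)]
        slow = list(blended)  # slow weights move toward the fast weights
        fast = list(blended)  # fast weights restart from the blend
    return step, fast, slow

step, fast, slow = 0, [1.0, 2.0], [0.0, 0.0]
for _ in range(5):
    # in real training the inner optimizer would update `fast` here
    step, fast, slow = lookahead_step(step, k=5, alpha=0.5,
                                      fast=fast, slow=slow)
```

With `k=5` and `alpha=0.5`, the blend fires exactly once in this loop, leaving both weight lists equal to the midpoint of the initial fast and slow values.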
1 changed file with 46 additions and 42 deletions

python/paddle/fluid/optimizer.py (+46, -42)
```diff
@@ -4884,48 +4884,52 @@ class LookaheadOptimizer(object):
             inputs={"X": fast_var},
             outputs={"Out": slow_var})
 
-        # Add Var k to main prog and startup prog
-        k = layers.create_global_var(
-            name="lookahead_k",
-            shape=[1],
-            value=int(self.k),
-            dtype='int32',
-            persistable=True)
-
-        # Add Var alpha to main prog and startup prog
-        alpha = layers.create_global_var(
-            name="lookahead_alpha",
-            shape=[1],
-            value=float(self.alpha),
-            dtype='float32',
-            persistable=True)
-
-        # Add Var step
-        step = layers.create_global_var(
-            name="lookahead_step",
-            shape=[1],
-            value=int(0),
-            dtype='int32',
-            persistable=True)
-        layers.increment(x=step, value=1.0, in_place=True)
-
-        # lookahead
-        zero_var = layers.fill_constant(shape=[1], dtype='float32', value=0.0)
-        one_var = layers.fill_constant(shape=[1], dtype='float32', value=1.0)
-        mod = layers.elementwise_mod(step, k)
-        with layers.control_flow.Switch() as switch:
-            with switch.case(mod == zero_var):
-                for param_name in params:
-                    fast_var = main_block.var(param_name)
-                    slow_var = param_to_slow[param_name]
-                    tmp_var = layers.elementwise_add(
-                        layers.elementwise_mul(fast_var, alpha),
-                        layers.elementwise_mul(
-                            slow_var, layers.elementwise_sub(one_var, alpha)))
-                    layers.assign(input=tmp_var, output=slow_var)
-                    layers.assign(input=tmp_var, output=fast_var)
-            with switch.default():
-                pass
+        with framework.program_guard(main_block.program, startup_program):
+            # Add Var k to main prog and startup prog
+            k = layers.create_global_var(
+                name="lookahead_k",
+                shape=[1],
+                value=int(self.k),
+                dtype='int32',
+                persistable=True)
+
+            # Add Var alpha to main prog and startup prog
+            alpha = layers.create_global_var(
+                name="lookahead_alpha",
+                shape=[1],
+                value=float(self.alpha),
+                dtype='float32',
+                persistable=True)
+
+            # Add Var step
+            step = layers.create_global_var(
+                name="lookahead_step",
+                shape=[1],
+                value=int(0),
+                dtype='int32',
+                persistable=True)
+            layers.increment(x=step, value=1.0, in_place=True)
+
+            # lookahead
+            zero_var = layers.fill_constant(
+                shape=[1], dtype='float32', value=0.0)
+            one_var = layers.fill_constant(
+                shape=[1], dtype='float32', value=1.0)
+            mod = layers.elementwise_mod(step, k)
+            with layers.control_flow.Switch() as switch:
+                with switch.case(mod == zero_var):
+                    for param_name in params:
+                        fast_var = main_block.var(param_name)
+                        slow_var = param_to_slow[param_name]
+                        tmp_var = layers.elementwise_add(
+                            layers.elementwise_mul(fast_var, alpha),
+                            layers.elementwise_mul(
+                                slow_var,
+                                layers.elementwise_sub(one_var, alpha)))
+                        layers.assign(input=tmp_var, output=slow_var)
+                        layers.assign(input=tmp_var, output=fast_var)
+                with switch.default():
+                    pass
         return mini_out
```