BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)
Commit 679a4c28
Authored on Mar 27, 2019 by whs; committed via GitHub on Mar 27, 2019.
Fix loss of learning rate variable in distillation when using lr decay. (#16471)
test=develop
Parent: 57dc3c19
3 changed files with 21 additions and 4 deletions (+21 −4):

  python/paddle/fluid/contrib/slim/distillation/distillation_strategy.py  +12 −3
  python/paddle/fluid/contrib/slim/graph/graph_wrapper.py                  +6 −0
  python/paddle/fluid/contrib/slim/tests/test_distillation_strategy.py     +3 −1
python/paddle/fluid/contrib/slim/distillation/distillation_strategy.py

@@ -13,7 +13,7 @@
 # limitations under the License.
 from ..core.strategy import Strategy
-from ....framework import Program, program_guard
+from ....framework import Program, Variable, program_guard
 from .... import Executor
 import logging
@@ -74,8 +74,17 @@ class DistillationStrategy(Strategy):
         startup_program = Program()
         with program_guard(graph.program, startup_program):
             context.distiller_optimizer._name = 'distillation_optimizer'
-            context.distiller_optimizer.minimize(
-                graph.var(graph.out_nodes['loss'])._var)
+            # The learning rate variable may be created in other program.
+            # Update information in optimizer to make
+            # learning rate variable being accessible in current program.
+            optimizer = context.distiller_optimizer
+            if isinstance(optimizer._learning_rate, Variable):
+                optimizer._learning_rate_map[
+                    graph.program] = optimizer._learning_rate
+            optimizer.minimize(
+                graph.var(graph.out_nodes['loss'])._var)
         exe = Executor(context.place)
         exe.run(startup_program, scope=context.scope)
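The change above follows a pattern worth spelling out: a fluid optimizer keeps a per-program map of learning-rate variables, so an LR variable created while building one program is invisible to `minimize()` calls made under another program. The fix registers the existing variable under the current program as well. Below is a minimal sketch of that pattern using toy stand-in classes (not the real `fluid.framework.Program`/`Variable` or a real optimizer; names like `_global_learning_rate` mirror fluid's internals but the classes here are hypothetical):

```python
class Program:
    """Stand-in for fluid.framework.Program."""
    pass


class Variable:
    """Stand-in for a learning-rate variable created under some program."""
    def __init__(self, name):
        self.name = name


class Optimizer:
    """Stand-in optimizer that resolves its LR variable per program,
    the way fluid optimizers do via _learning_rate_map."""
    def __init__(self, learning_rate):
        self._learning_rate = learning_rate
        self._learning_rate_map = {}  # program -> LR Variable

    def _global_learning_rate(self, program):
        # Lookup fails for programs the LR variable was never registered in.
        return self._learning_rate_map.get(program)


train_program = Program()    # program where the decayed LR was created
distill_program = Program()  # program built by the distillation pass

lr_var = Variable('learning_rate_decay')
opt = Optimizer(lr_var)
opt._learning_rate_map[train_program] = lr_var

# Before the fix: the distillation program cannot see the LR variable.
assert opt._global_learning_rate(distill_program) is None

# The fix: if the LR is a Variable, register it for the current program too.
if isinstance(opt._learning_rate, Variable):
    opt._learning_rate_map[distill_program] = opt._learning_rate

assert opt._global_learning_rate(distill_program) is lr_var
```

With a plain float learning rate the `isinstance` check is false and nothing happens, which is why the bug only appeared when lr decay (a Variable-valued schedule) was used.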
python/paddle/fluid/contrib/slim/graph/graph_wrapper.py

@@ -402,6 +402,12 @@ class GraphWrapper(object):
         elif 'cost' in graph.out_nodes:
             target_name = graph.out_nodes['cost']
         target = graph.var(target_name)._var
+        # The learning rate variable may be created in other program.
+        # Update information in optimizer to make
+        # learning rate variable being accessible in current program.
+        if isinstance(optimizer._learning_rate, Variable):
+            optimizer._learning_rate_map[
+                graph.program] = optimizer._learning_rate
         optimizer.minimize(target, no_grad_set=no_grad_var_names)
         exe = Executor(place)
python/paddle/fluid/contrib/slim/tests/test_distillation_strategy.py

@@ -41,9 +41,11 @@ class TestDistillationStrategy(unittest.TestCase):
         cost = fluid.layers.cross_entropy(input=out, label=label)
         avg_cost = fluid.layers.mean(x=cost)
         optimizer = fluid.optimizer.Momentum(
             momentum=0.9,
-            learning_rate=0.01,
+            learning_rate=fluid.layers.piecewise_decay(
+                boundaries=[5, 10], values=[0.01, 0.001, 0.0001]),
             regularization=fluid.regularizer.L2Decay(4e-5))
         place = fluid.CUDAPlace(0)
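The test now exercises the bug by swapping the constant learning rate for `fluid.layers.piecewise_decay`, which produces a Variable-valued schedule: `values[i]` applies on the i-th interval delimited by the step `boundaries`. A plain-Python sketch of that schedule (my own helper, not fluid's implementation, assuming the usual `step < boundary` interval convention):

```python
def piecewise_decay(step, boundaries, values):
    """Learning rate for a global step under a piecewise-constant schedule.

    Requires len(values) == len(boundaries) + 1: one value per interval.
    """
    for boundary, value in zip(boundaries, values):
        if step < boundary:
            return value
    # Past the last boundary, the final value applies forever.
    return values[-1]


# Schedule from the test: boundaries=[5, 10], values=[0.01, 0.001, 0.0001]
rates = [piecewise_decay(s, [5, 10], [0.01, 0.001, 0.0001])
         for s in (0, 5, 12)]
print(rates)  # [0.01, 0.001, 0.0001]
```

Because the schedule changes with the global step, fluid materializes it as a variable in the program where it is built, which is exactly the situation the two fixes above handle.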