Crayon鑫 / Paddle · commit f3ac4d8e (forked from PaddlePaddle / Paddle)
Unverified commit f3ac4d8e, authored on Oct 27, 2017 by Abhinav Arora, committed via GitHub on Oct 27, 2017.

Adding L1 Decay Regularizer (#5173)

Parent commit: 9ecebb2d
Showing 2 changed files with 77 additions and 1 deletion:

- python/paddle/v2/framework/regularizer.py (+43, -1)
- python/paddle/v2/framework/tests/test_regularizer.py (+34, -0)
python/paddle/v2/framework/regularizer.py

```diff
 import paddle.v2.framework.framework as framework
 
-__all__ = ['append_regularization_ops', 'L2DecayRegularizer']
+__all__ = [
+    'append_regularization_ops', 'L2DecayRegularizer', 'L1DecayRegularizer'
+]
 
 
 def append_regularization_ops(parameters_and_grads):
...
@@ -97,3 +99,43 @@ class L2DecayRegularizer(WeightDecayRegularizer):
             attrs={"scale": self._regularization_coeff})
         return decay
+
+
+class L1DecayRegularizer(WeightDecayRegularizer):
+    """Implements the L1 Weight Decay Regularization
+    """
+
+    def __init__(self, regularization_coeff=0.0):
+        assert regularization_coeff is not None
+        super(L1DecayRegularizer, self).__init__()
+        self._regularization_coeff = regularization_coeff
+
+    def __call__(self, param, block):
+        """Add L1 weight decay ops to network
+
+        Adds L1 weight decay ops.
+        L1WeightDecay = reg_coeff * sign(parameter)
+
+        Args:
+            param: parameter variable for which regularization is applied
+            block: block in which variable is to be created
+
+        Returns:
+            new variable for weight decay
+        """
+        assert isinstance(param, framework.Parameter)
+        assert isinstance(block, framework.Block)
+        decay = block.create_var(
+            dtype="float32", shape=param.shape, lod_level=param.lod_level)
+        # Append sign op
+        block.append_op(
+            type='sign', inputs={"X": param}, outputs={"Out": decay})
+        # Append scale op to the output of sign op
+        block.append_op(
+            type='scale',
+            inputs={"X": decay},
+            outputs={"Out": decay},
+            attrs={"scale": self._regularization_coeff})
+        return decay
```
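The docstring's formula, `L1WeightDecay = reg_coeff * sign(parameter)`, can be sketched in plain Python. This is only a minimal illustration of the value the appended `sign` and `scale` ops compute; the helper name `l1_weight_decay` is hypothetical, not a Paddle API:

```python
def l1_weight_decay(param, reg_coeff):
    """Value of the L1 decay term: reg_coeff * sign(w) per weight.

    A plain-Python sketch (hypothetical helper); the real regularizer
    instead appends 'sign' and 'scale' ops to a Paddle Block.
    """
    def sign(w):
        # Mirrors the 'sign' op: -1, 0, or +1.
        return (w > 0) - (w < 0)

    # Mirrors the 'scale' op applied to the sign output.
    return [reg_coeff * sign(w) for w in param]

print(l1_weight_decay([-2.0, 0.0, 3.0], 0.5))  # [-0.5, 0.0, 0.5]
```

Note that weights exactly at zero contribute no decay, which is why L1 regularization tends to keep sparse weights sparse.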
python/paddle/v2/framework/tests/test_regularizer.py

```diff
...
@@ -39,5 +39,39 @@ class TestL2DecayRegularizer(unittest.TestCase):
         self.assertEqual(block.ops[-2].type, 'scale')
+
+
+class TestL1DecayRegularizer(unittest.TestCase):
+    def test_l2decay_regularizer(self):
+        program = framework.Program()
+        block = program.global_block()
+        mul_x = block.create_parameter(
+            dtype="float32",
+            shape=[5, 10],
+            lod_level=0,
+            name="mul.x",
+            regularizer=regularizer.L1DecayRegularizer(0.5))
+        self.assertTrue(mul_x.regularizer is not None)
+        self.assertTrue(
+            isinstance(mul_x.regularizer, regularizer.L1DecayRegularizer))
+        mul_y = block.create_var(
+            dtype="float32", shape=[10, 8], lod_level=0, name="mul.y")
+        mul_out = block.create_var(
+            dtype="float32", shape=[5, 8], lod_level=0, name="mul.out")
+        block.append_op(
+            type="mul",
+            inputs={"X": mul_x,
+                    "Y": mul_y},
+            outputs={"Out": mul_out},
+            attrs={"x_num_col_dims": 1})
+        params_grads = append_backward_ops(mul_out)
+        self.assertEqual(len(params_grads), 1)
+        count_ops = len(block.ops)
+        params_grads = optimizer.append_regularization_ops(params_grads)
+        self.assertEqual(len(params_grads), 1)
+        self.assertEqual(len(block.ops), count_ops + 3)
+        self.assertEqual(block.ops[-1].type, 'elementwise_add')
+        self.assertEqual(block.ops[-2].type, 'scale')
+        self.assertEqual(block.ops[-3].type, 'sign')
 
 
 if __name__ == '__main__':
     unittest.main()
```
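The test's `count_ops + 3` assertion reflects the three ops the regularization pass appends per parameter: `sign`, then `scale`, then `elementwise_add` onto the gradient. A plain-Python sketch of the value that op sequence computes (the helper name `l1_regularized_grad` is hypothetical, not a Paddle API):

```python
def l1_regularized_grad(param, grad, reg_coeff):
    """Sketch of what the three appended ops compute for one parameter.

    Hypothetical helper for illustration; Paddle builds these as graph
    ops rather than evaluating them eagerly.
    """
    signs = [(w > 0) - (w < 0) for w in param]     # 'sign' op
    decay = [reg_coeff * s for s in signs]         # 'scale' op
    return [g + d for g, d in zip(grad, decay)]    # 'elementwise_add' op

print(l1_regularized_grad([1.5, -0.5, 0.0], [0.1, 0.2, 0.3], 0.5))
# [0.6, -0.3, 0.3]
```

Since `elementwise_add` is appended last, the test checks the op types from the end of the block in reverse order: `ops[-1]` is `elementwise_add`, `ops[-2]` is `scale`, `ops[-3]` is `sign`.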