PaddlePaddle / Paddle
Commit 03f4beb8
Authored June 18, 2018 by qiaolongfei

add doc for ErrorClipByValue GradientClipByValue and GradientClipByGlobalNorm

Parent: e3578ab1
Showing 1 changed file with 85 additions and 11 deletions.

python/paddle/fluid/clip.py   +85  -11
@@ -24,8 +24,6 @@ __all__ = [
     'GradientClipByValue',
     'GradientClipByNorm',
     'GradientClipByGlobalNorm',
-    'append_gradient_clip_ops',
-    'error_clip_callback',
 ]
@@ -38,6 +36,25 @@ class BaseErrorClipAttr(object):
 class ErrorClipByValue(BaseErrorClipAttr):
+    """
+    Clips tensor values to the range [min, max].
+
+    Given a tensor t, this operation clips its value to min and max inplace.
+
+    - Any values less than min are set to min.
+    - Any values greater than max are set to max.
+
+    Args:
+        max (float): The maximum value to clip by.
+        min (float, optional): The minimum value to clip by. If not set by user, \
+            will be set to -max by framework.
+
+    Examples:
+        .. code-block:: python
+
+            var = fluid.framework.Variable(..., error_clip=ErrorClipByValue(max=5.0), ...)
+    """
+
     def __init__(self, max, min=None):
         max = float(max)
         if min is None:
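For intuition, the rule in the new ErrorClipByValue docstring is plain elementwise value clipping. Below is a minimal numpy sketch of that semantics (an illustration only, not Paddle's implementation); the min-defaults-to--max behavior mirrors the constructor shown in the hunk above.

import numpy as np

# Elementwise value clipping, as the docstring describes:
# values below min become min, values above max become max.
# The max/min parameter names mirror ErrorClipByValue.__init__.
def clip_by_value(t, max, min=None):
    max = float(max)
    if min is None:
        min = -max  # framework default: min = -max
    return np.clip(t, min, max)

t = np.array([-7.0, -2.0, 0.5, 3.0, 9.0])
print(clip_by_value(t, max=5.0))  # [-5.  -2.   0.5  3.   5. ]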
@@ -99,6 +116,31 @@ class NullGradientClipAttr(BaseGradientClipAttr):
 class GradientClipByValue(BaseGradientClipAttr):
+    """
+    Clips gradient values to the range [min, max].
+
+    Given a tensor t, this operation clips its value to min and max inplace.
+
+    - Any values less than min are set to min.
+    - Any values greater than max are set to max.
+
+    Args:
+        max (float): The maximum value to clip by.
+        min (float, optional): The minimum value to clip by. If not set by user, \
+            will be set to -max by framework.
+
+    Examples:
+        .. code-block:: python
+
+            w_param_attrs = ParamAttr(name=None,
+                initializer=UniformInitializer(low=-1.0, high=1.0, seed=0),
+                learning_rate=1.0,
+                regularizer=L1Decay(1.0),
+                trainable=True,
+                clip=GradientClipByValue(min=-1.0, max=1.0))
+            y_predict = fluid.layers.fc(input=x, size=1, param_attr=w_param_attrs)
+    """
+
     def __init__(self, max, min=None):
         max = float(max)
         if min is None:
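One wording note on the example above: because the constructor signature is __init__(self, max, min=None), a positional call such as GradientClipByValue(-1.0, 1.0) would bind max=-1.0 and min=1.0, inverting the range; hence the keyword form used in the docstring example. A plain-Python sketch of the pitfall, using a hypothetical helper that mirrors the signature:

# Mirrors GradientClipByValue.__init__(self, max, min=None);
# returns the effective (min, max) pair for illustration.
def effective_range(max, min=None):
    max = float(max)
    min = -max if min is None else float(min)
    return (min, max)

print(effective_range(1.0))                # (-1.0, 1.0): min defaults to -max
print(effective_range(min=-1.0, max=1.0))  # (-1.0, 1.0): explicit keywords
print(effective_range(-1.0, 1.0))          # (1.0, -1.0): positional call inverts the range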
@@ -120,6 +162,37 @@ class GradientClipByValue(BaseGradientClipAttr):
 class GradientClipByNorm(BaseGradientClipAttr):
+    """
+    Clips tensor values to a maximum L2-norm.
+
+    This operator limits the L2 norm of the input :math:`X` within :math:`max\_norm`.
+    If the L2 norm of :math:`X` is less than or equal to :math:`max\_norm`, :math:`Out`
+    will be the same as :math:`X`. If the L2 norm of :math:`X` is greater than
+    :math:`max\_norm`, :math:`X` will be linearly scaled to make the L2 norm of
+    :math:`Out` equal to :math:`max\_norm`, as shown in the following formula:
+
+    .. math::
+
+        Out = \\frac{max\_norm * X}{norm(X)},
+
+    where :math:`norm(X)` represents the L2 norm of :math:`X`.
+
+    Args:
+        clip_norm (float): The maximum norm value.
+
+    Examples:
+        .. code-block:: python
+
+            w_param_attrs = ParamAttr(name=None,
+                initializer=UniformInitializer(low=-1.0, high=1.0, seed=0),
+                learning_rate=1.0,
+                regularizer=L1Decay(1.0),
+                trainable=True,
+                clip=GradientClipByNorm(clip_norm=2.0))
+            y_predict = fluid.layers.fc(input=x, size=1, param_attr=w_param_attrs)
+    """
+
     def __init__(self, clip_norm):
         self.clip_norm = clip_norm
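A minimal numpy sketch of the scaling rule in the formula above (illustrating the math only, not Paddle's kernel):

import numpy as np

# Scale x so its L2 norm does not exceed clip_norm:
#   out = x                          if norm(x) <= clip_norm
#   out = clip_norm * x / norm(x)    otherwise
def clip_by_norm(x, clip_norm):
    norm = np.linalg.norm(x)
    if norm <= clip_norm:
        return x
    return clip_norm * x / norm

x = np.array([3.0, 4.0])                # norm(x) = 5.0
print(clip_by_norm(x, clip_norm=2.0))   # [1.2 1.6], whose norm is exactly 2.0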
@@ -183,15 +256,16 @@ class GradientClipByGlobalNorm(BaseGradientClipAttr):
 def set_gradient_clip(clip, param_list=None, program=None):
     """
     To specify parameters that require gradient clip.
+
     Args:
         clip(BaseGradientClipAttr): An instance of some derived class of BaseGradientClipAttr,
             which describes the type and detailed attributes of required gradient clip.
-        param_list(list, None by default): Parameters that require gradient clip.
+        param_list(list(Variable)): Parameters that require gradient clip.
             It can be a list of parameter or a list of parameter's name.
             When it's None, all parameters in the program will be included.
-        program(Program, None by default): The program where parameters are.
+        program(Program): The program where parameters are.
             Will be the default main program when assigned with None.
     """
     if not isinstance(clip, BaseGradientClipAttr):
         raise TypeError(
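set_gradient_clip is most often paired with GradientClipByGlobalNorm, which rescales an entire list of gradients by their combined L2 norm. Below is a numpy sketch of the standard global-norm rule, assuming Paddle follows the usual formulation (scale = clip_norm / max(global_norm, clip_norm)); it is a generic illustration, not Paddle's implementation.

import numpy as np

# Global-norm clipping: every gradient shares one scale factor,
# so the relative directions of the gradients are preserved.
def clip_by_global_norm(grads, clip_norm):
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads]

grads = [np.array([3.0, 0.0]), np.array([0.0, 4.0])]  # global norm = 5.0
print(clip_by_global_norm(grads, clip_norm=2.5))
# [array([1.5, 0. ]), array([0., 2.])] -- combined norm is now 2.5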