机器未来 / Paddle (forked from PaddlePaddle / Paddle)

Commit db209f48, authored on Nov 08, 2017 by ranqiu
Update annotations of layers.py
Parent: f1fac487
Showing 1 changed file with 107 additions and 89 deletions

python/paddle/trainer_config_helpers/layers.py (+107 −89)
@@ -5770,20 +5770,21 @@ def cross_entropy(input,

     :param input: The first input layer.
     :type input: LayerOutput.
     :param label: The input label.
-    :type input: LayerOutput.
+    :type input: LayerOutput
     :param name: The name of this layer. It is optional.
-    :type name: None | basestring.
-    :param coeff: The cost is multiplied with coeff.
-                  The coefficient affects the gradient in the backward.
-    :type coeff: float.
+    :type name: basestring
+    :param coeff: The weight of the gradient in the back propagation.
+                  1.0 is the default.
+    :type coeff: float
     :param weight: The cost of each sample is multiplied with each weight.
                    The weight should be a layer with size=1. Note that gradient
                    will not be calculated for weight.
     :type weight: LayerOutout
-    :param layer_attr: Extra Layer Attribute.
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute
     :return: LayerOutput object.
-    :rtype: LayerOutput.
+    :rtype: LayerOutput
     """
     ipts, parents = __cost_input__(input, label, weight)
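For reference, the multi-class cross-entropy cost this docstring describes is the negative log-probability the model assigns to the true label. A minimal per-sample sketch in plain Python (illustrative only, not PaddlePaddle's implementation; the `coeff` scaling of the gradient is omitted):

```python
import math

def cross_entropy_sample(probs, label):
    """Cross-entropy cost for one sample: the negative log of the
    probability the model assigns to the true label."""
    return -math.log(probs[label])

# A confident correct prediction yields a small cost, an unsure one a larger cost.
confident = cross_entropy_sample([0.1, 0.8, 0.1], label=1)
unsure = cross_entropy_sample([0.4, 0.2, 0.4], label=1)
```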
@@ -5816,19 +5817,21 @@ def cross_entropy_with_selfnorm(input,
                                           label=label_layer)

     :param input: The first input layer.
-    :type input: LayerOutput.
+    :type input: LayerOutput
     :param label: The input label.
-    :type input: LayerOutput.
+    :type input: LayerOutput
     :param name: The name of this layer. It is optional.
-    :type name: None | basestring.
-    :param coeff: The coefficient affects the gradient in the backward.
-    :type coeff: float.
+    :type name: basestring
+    :param coeff: The weight of the gradient in the back propagation.
+                  1.0 is the default.
+    :type coeff: float
     :param softmax_selfnorm_alpha: The scale factor affects the cost.
-    :type softmax_selfnorm_alpha: float.
-    :param layer_attr: Extra Layer Attribute.
+    :type softmax_selfnorm_alpha: float
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute
     :return: LayerOutput object.
-    :rtype: LayerOutput.
+    :rtype: LayerOutput
     """
     Layer(
         name=name,
@@ -5849,7 +5852,7 @@ def cross_entropy_with_selfnorm(input,

 @layer_support()
 def sum_cost(input, name=None, layer_attr=None):
     """
-    A loss layer which calculate the sum of the input as loss
+    A loss layer which calculates the sum of the input as loss.

     The example usage is:
@@ -5858,10 +5861,11 @@ def sum_cost(input, name=None, layer_attr=None):

        cost = sum_cost(input=input_layer)

     :param input: The input of this layer.
-    :type input: LayerOutput.
+    :type input: LayerOutput
     :param name: The name of this layer. It is optional.
-    :type name: None | basestring.
-    :param layer_attr: Extra Layer Attribute.
+    :type name: basestring
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute
     :return: LayerOutput object.
     :rtype: LayerOutput.
@@ -5901,16 +5905,18 @@ def huber_regression_cost(input,

        cost = huber_regression_cost(input=input_layer, label=label_layer)

     :param input: The first input layer.
-    :type input: LayerOutput.
+    :type input: LayerOutput
     :param label: The input label.
-    :type input: LayerOutput.
+    :type input: LayerOutput
     :param name: The name of this layer. It is optional.
-    :type name: None | basestring.
+    :type name: basestring
     :param delta: The difference between the observed and predicted values.
-    :type delta: float.
-    :param coeff: The coefficient affects the gradient in the backward.
-    :type coeff: float.
-    :param layer_attr: Extra Layer Attribute.
+    :type delta: float
+    :param coeff: The weight of the gradient in the back propagation.
+                  1.0 is the default.
+    :type coeff: float
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute
     :return: LayerOutput object.
     :rtype: LayerOutput.
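The Huber regression cost documented here blends a squared loss for small residuals with a linear loss in the tails, with `delta` marking the crossover. A minimal sketch of the standard per-sample Huber formula (an assumption for illustration; the actual Paddle kernel lives in C++):

```python
def huber_regression(pred, label, delta=1.0):
    """Standard Huber loss: quadratic for residuals within delta,
    linear (continuous in value and slope) beyond it."""
    a = abs(pred - label)
    if a <= delta:
        return 0.5 * a * a
    return delta * (a - 0.5 * delta)
```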
@@ -5951,17 +5957,19 @@ def huber_classification_cost(input,

        cost = huber_classification_cost(input=input_layer, label=label_layer)

     :param input: The first input layer.
-    :type input: LayerOutput.
+    :type input: LayerOutput
     :param label: The input label.
-    :type input: LayerOutput.
+    :type input: LayerOutput
     :param name: The name of this layer. It is optional.
-    :type name: None | basestring.
-    :param coeff: The coefficient affects the gradient in the backward.
-    :type coeff: float.
-    :param layer_attr: Extra Layer Attribute.
+    :type name: basestring
+    :param coeff: The weight of the gradient in the back propagation.
+                  1.0 is the default.
+    :type coeff: float
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute
     :return: LayerOutput object.
-    :rtype: LayerOutput.
+    :rtype: LayerOutput
     """
     assert isinstance(input, LayerOutput)
     if input.size is not None:
@@ -5998,10 +6006,12 @@ def multi_binary_label_cross_entropy(input,

     :param label: The input label.
     :type input: LayerOutput
     :param name: The name of this layer. It is optional.
-    :type name: None | basestring
-    :param coeff: The coefficient affects the gradient in the backward.
+    :type name: basestring
+    :param coeff: The weight of the gradient in the back propagation.
+                  1.0 is the default.
     :type coeff: float
-    :param layer_attr: Extra Layer Attribute.
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute
     :return: LayerOutput object.
     :rtype: LayerOutput
@@ -6104,7 +6114,7 @@ def cross_entropy_over_beam(input, name=None):

     :param input: Input beams for this layer.
     :type input: BeamInput
-    :param name: The name of this layer.
+    :param name: The name of this layer. It is optional.
     :type name: basestring
     :return: LayerOutput object.
     :rtype: LayerOutput
@@ -6139,7 +6149,7 @@ def cross_entropy_over_beam(input, name=None):

 def smooth_l1_cost(input, label, name=None, coeff=1.0, layer_attr=None):
     """
     This is a L1 loss but more smooth. It requires that the
-    size of input and label are equal. The formula is as follows,
+    sizes of input and label are equal. The formula is as follows,

     .. math::
@@ -6151,8 +6161,9 @@ def smooth_l1_cost(input, label, name=None, coeff=1.0, layer_attr=None):
...
@@ -6151,8 +6161,9 @@ def smooth_l1_cost(input, label, name=None, coeff=1.0, layer_attr=None):
smooth_{L1}(x) =
\\
begin{cases} 0.5x^2&
\\
text{if}
\\
|x| < 1
\\\\
|x|-0.5&
\\
text{otherwise} \end{cases}
smooth_{L1}(x) =
\\
begin{cases} 0.5x^2&
\\
text{if}
\\
|x| < 1
\\\\
|x|-0.5&
\\
text{otherwise} \end{cases}
More details can be found by referring to `Fast R-CNN
Reference:
<https://arxiv.org/pdf/1504.08083v2.pdf>`_
Fast R-CNN
https://arxiv.org/pdf/1504.08083v2.pdf
The example usage is:
The example usage is:
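The piecewise formula above can be checked against a tiny pure-Python sketch (illustrative only, per-element; the layer applies it elementwise and sums):

```python
def smooth_l1(x):
    """smooth_L1(x) = 0.5 * x**2 if |x| < 1, else |x| - 0.5,
    matching the .. math:: block in the docstring.
    The two branches meet continuously at |x| = 1."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5
```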
@@ -6166,10 +6177,11 @@ def smooth_l1_cost(input, label, name=None, coeff=1.0, layer_attr=None):

     :param label: The input label.
     :type input: LayerOutput
     :param name: The name of this layer. It is optional.
-    :type name: None | basestring
+    :type name: basestring
     :param coeff: The coefficient affects the gradient in the backward.
     :type coeff: float
-    :param layer_attr: Extra Layer Attribute.
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute
     :return: LayerOutput object.
     :rtype: LayerOutput
@@ -6191,12 +6203,12 @@ def smooth_l1_cost(input, label, name=None, coeff=1.0, layer_attr=None):

 @wrap_name_default()
 def multiplex_layer(input, name=None, layer_attr=None):
     """
-    This layer multiplex multiple layers according to the index,
-    which is provided by the first input layer.
-    inputs[0]: the index of the layer to output of size batchSize.
+    This layer multiplexes multiple layers according to the indexes,
+    which are provided by the first input layer.
+    inputs[0]: the indexes of the layers to form the output of size batchSize.
     inputs[1:N]; the candidate output data.
-    For each index i from 0 to batchSize - 1, the output is the i-th row of the
-    (index[i] + 1)-th layer.
+    For each index i from 0 to batchSize - 1, the i-th row of the output is
+    the same to the i-th row of the (index[i] + 1)-th layer.

     For each i-th row of output:

     .. math::
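The selection rule described above, where row i of the output is copied from row i of layer index[i] + 1, can be sketched in plain Python (list-of-lists stand-ins for layer outputs, illustration only):

```python
def multiplex(inputs):
    """inputs[0] holds one index per batch row; row i of the result
    is copied from row i of inputs[index[i] + 1]."""
    index = inputs[0]
    return [inputs[index[i] + 1][i] for i in range(len(index))]

# Two candidate layers, batch size 2: row 0 comes from the first
# candidate, row 1 from the second.
out = multiplex([[0, 1], [[1, 2], [3, 4]], [[5, 6], [7, 8]]])
```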
@@ -6215,7 +6227,8 @@ def multiplex_layer(input, name=None, layer_attr=None):

     :type input: list of LayerOutput
     :param name: The name of this layer. It is optional.
     :type name: basestring
-    :param layer_attr: extra layer attributes.
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute.
     :return: LayerOutput object.
     :rtype: LayerOutput
@@ -6319,14 +6332,14 @@ def row_conv_layer(input,

     :type context_len: int
     :param act: Activation Type. LinearActivation is the default.
     :type act: BaseActivation
-    :param param_attr: The Parameter Attribute. If None, the parameter will be
-                       initialized smartly. It's better to set it by yourself.
+    :param param_attr: The parameter attribute. See ParameterAttribute for
+                       details.
     :type param_attr: ParameterAttribute
-    :param layer_attr: Extra Layer config.
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute | None
     :return: LayerOutput object.
     :rtype: LayerOutput
     """
     assert isinstance(input, LayerOutput)
     assert context_len > 0, "the context_len must be greatet than 0."
@@ -6351,7 +6364,7 @@ def prelu_layer(input,
                 param_attr=None,
                 layer_attr=None):
     """
-    The Parameter Relu activation that actives outputs with a learnable weight.
+    The Parametric Relu activation that actives outputs with a learnable weight.

     Reference:
         Delving Deep into Rectifiers: Surpassing Human-Level Performance on
@@ -6371,16 +6384,17 @@ def prelu_layer(input,

     :type name: basestring
     :param input: The input of this layer.
     :type input: LayerOutput
-    :param partial_sum: this parameter makes a group of inputs share a same weight.
+    :param partial_sum: this parameter makes a group of inputs share the same weight.

         - partial_sum = 1, indicates the element-wise activation: each element has a weight.
-        - partial_sum = number of elements in one channel, indicates the channel-wise activation, elements in a channel share a same weight.
-        - partial_sum = number of outputs, indicates all elements share a same weight.
+        - partial_sum = number of elements in one channel, indicates the channel-wise activation, elements in a channel share the same weight.
+        - partial_sum = number of outputs, indicates all elements share the same weight.

     :type partial_sum: int
     :param param_attr: The parameter attribute. See ParameterAttribute for details.
-    :type param_attr: ParameterAttribute | None
-    :param layer_attr: Extra layer configurations. Default is None.
+    :type param_attr: ParameterAttribute
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute | None
     :return: LayerOutput object.
     :rtype: LayerOutput
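Parametric ReLU passes positive inputs through unchanged and scales negative ones by a learned weight. A minimal element-wise sketch (partial_sum grouping omitted; w is a single shared weight here):

```python
def prelu(xs, w):
    """PReLU over a flat list: x if x > 0 else w * x,
    where w is the learnable slope for negative inputs."""
    return [x if x > 0 else w * x for x in xs]
```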
@@ -6436,34 +6450,36 @@ def gated_unit_layer(input,

     :param input: The input of this layer.
     :type input: LayerOutput
-    :param size: output size of the gated unit.
+    :param size: The dimemsion of this layer's output.
     :type size: int
-    :param act: Activation type of the projected input. LinearActivation is the default.
+    :param act: Activation type of the projection. LinearActivation is the default.
     :type act: BaseActivation
     :param name: The name of this layer. It is optional.
     :type name: basestring
-    :param gate_attr: Attributes to tune the gate output, for example, error
-                      clipping threshold, dropout and so on. See ExtraLayerAttribute for
-                      more details.
+    :param gate_attr: The extra layer attribute of the gate. See ExtraLayerAttribute for
+                      details.
     :type gate_attr: ExtraLayerAttribute | None
-    :param gate_param_attr: Attributes to tune the learnable projected matrix
-                            parameter of the gate.
-    :type gate_param_attr: ParameterAttribute | None
-    :param gate_bias_attr: Attributes to tune the learnable bias of the gate.
-    :type gate_bias_attr: ParameterAttribute | None
-    :param inproj_attr: Attributes to the tune the projected input, for
-                        example, error clipping threshold, dropout and so on. See
-                        ExtraLayerAttribute for more details.
+    :param gate_param_attr: The parameter attribute of the gate. See ParameterAttribute
+                            for details.
+    :type gate_param_attr: ParameterAttribute
+    :param gate_bias_attr: The bias attribute of the gate. If the parameter is set to
+                           False or something not type of ParameterAttribute, no bias is
+                           defined. If the parameter is set to True, the bias is initialized
+                           to zero.
+    :type gate_bias_attr: ParameterAttribute | bool | None | Any
+    :param inproj_attr: Extra layer attributes of the projection. See ExtraLayerAttribute for
+                        details.
     :type inproj_attr: ExtraLayerAttribute | None
-    :param inproj_param_attr: Attributes to tune the learnable parameter of
-                              the projection of input.
-    :type inproj_param_attr: ParameterAttribute | None
-    :param inproj_bias_attr: Attributes to tune the learnable bias of
-                             projection of the input.
-    :type inproj_bias_attr: ParameterAttribute | None
-    :param layer_attr: Attributes to tune the final output of the gated unit,
-                       for example, error clipping threshold, dropout and so on. See
-                       ExtraLayerAttribute for more details.
+    :param inproj_param_attr: The parameter attribute of the projection. See ParameterAttribute
+                              for details.
+    :type inproj_param_attr: ParameterAttribute
+    :param inproj_bias_attr: The bias attribute of the projection. If the parameter is set to
+                             False or something not type of ParameterAttribute, no bias is
+                             defined. If the parameter is set to True, the bias is initialized
+                             to zero.
+    :type inproj_bias_attr: ParameterAttribute | bool | None | Any
+    :param layer_attr: Extra layer attribute of the product. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute | None
     :return: LayerOutput object.
     :rtype: LayerOutput
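The gated unit follows the gated linear unit pattern: one projection of the input is passed through, multiplied elementwise by a second, sigmoid-activated gate projection. A hedged sketch with hypothetical per-element weight vectors standing in for the projection matrices the layer actually learns via gate_param_attr and inproj_param_attr:

```python
import math

def gated_unit(x, proj_w, gate_w):
    """out[i] = (x[i] * proj_w[i]) * sigmoid(x[i] * gate_w[i]).
    proj_w and gate_w are illustrative stand-ins for learned weights."""
    proj = [xi * wi for xi, wi in zip(x, proj_w)]
    gate = [1.0 / (1.0 + math.exp(-(xi * wi))) for xi, wi in zip(x, gate_w)]
    return [p * g for p, g in zip(proj, gate)]
```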
@@ -6659,9 +6675,9 @@ def clip_layer(input, min, max, name=None):

     :param input: The input of this layer.
     :type input: LayerOutput.
     :param min: The lower threshold for clipping.
-    :type min: double
+    :type min: float
     :param max: The upper threshold for clipping.
-    :type max: double
+    :type max: float
     :return: LayerOutput object.
     :rtype: LayerOutput
     """
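clip_layer bounds every element of its input between the min and max thresholds; its effect is equivalent to this one-liner (illustration only):

```python
def clip(xs, lo, hi):
    """Clamp every element of xs into the closed interval [lo, hi]."""
    return [min(max(x, lo), hi) for x in xs]
```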
@@ -6709,7 +6725,6 @@ def seq_slice_layer(input, starts, ends, name=None):

     :type ends: LayerOutput | None
     :return: LayerOutput object.
     :rtype: LayerOutput
     """
     assert isinstance(input, LayerOutput), (
@@ -6830,7 +6845,7 @@ def img_conv3d_layer(input,

     :param padding: The numbers of padding along three axises. If the parameter is set to
                     one integer, they will be same.
     :type padding: int | tuple | list
-    :param bias_attr: The Bias Attribute. If the parameter is set to
+    :param bias_attr: The bias attribute. If the parameter is set to
                       False or something not type of ParameterAttribute,
                       no bias is defined. If the parameter is set to
                       True, the bias is initialized to zero.
@@ -6839,11 +6854,13 @@ def img_conv3d_layer(input,
                          set to None, its actual value will be automatically set to
                          the channels number of the input .
     :type num_channels: int
-    :param param_attr: The parameter attribute of the convolution.
+    :param param_attr: The parameter attribute of the convolution. See ParameterAttribute for
+                       details.
     :type param_attr: ParameterAttribute
     :param shared_biases: Whether biases will be shared between filters or not.
     :type shared_biases: bool
-    :param layer_attr: Extra layer attributes.
+    :param layer_attr: The extra layer attributes. See ExtraLayerAttribute for
+                       details.
     :type layer_attr: ExtraLayerAttribute
     :param trans: True if it is a convTransLayer, False if it is a convLayer
     :type trans: bool
@@ -6950,9 +6967,10 @@ def scale_shift_layer(input, name=None, param_attr=None, bias_attr=None):

     :type name: basestring
     :param input: The input of this layer.
     :type input: LayerOutput
-    :param param_attr: The parameter attribute of scaling.
+    :param param_attr: The parameter attribute of scaling. See ParameterAttribute for
+                       details.
     :type param_attr: ParameterAttribute
-    :param bias_attr: The Bias Attribute. If the parameter is set to
+    :param bias_attr: The bias attribute. If the parameter is set to
                       False or something not type of ParameterAttribute,
                       no bias is defined. If the parameter is set to
                       True, the bias is initialized to zero.
@@ -7013,7 +7031,7 @@ def sub_seq_layer(input, offsets, sizes, act=None, bias_attr=None, name=None):

     :type sizes: LayerOutput
     :param act: Activation type, LinearActivation is the default.
     :type act: BaseActivation.
-    :param bias_attr: The Bias Attribute. If the parameter is set to
+    :param bias_attr: The bias attribute. If the parameter is set to
                       False or something not type of ParameterAttribute,
                       no bias is defined. If the parameter is set to
                       True, the bias is initialized to zero.