Commit c6c9c657
Project: Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle)
Author: fengjiayi
Committed: June 13, 2018
Parent: 8453740b

update doc
Showing 4 changed files with 155 additions and 66 deletions (+155 −66)
python/paddle/fluid/layers/control_flow.py             +25 −11
python/paddle/fluid/layers/learning_rate_scheduler.py  +53 −17
python/paddle/fluid/layers/nn.py                       +49 −24
python/paddle/fluid/layers/tensor.py                   +28 −14
python/paddle/fluid/layers/control_flow.py
...
@@ -748,16 +748,25 @@ def max_sequence_len(rank_table):
 def lod_tensor_to_array(x, table):
""" Convert a LOD_TENSOR to an LOD_TENSOR_ARRAY.
"""
Convert a LoDTensor to a LoDTensorArray.
This function split a LoDTesnor to a LoDTensorArray according to its LoD
information. LoDTensorArray is an alias of C++ std::vector<LoDTensor> in
Paddle. The generated LoDTensorArray of this function can be further read
or written by 'read_from_array()' and 'write_to_array()' operators. However,
this function is generally an internal component of Paddle 'DynamicRNN'.
Users should not use it directly.
Args:
Args:
x (Variable|list): The L
OD tensor to be converted to a LOD tensor a
rray.
x (Variable|list): The L
oDTensor to be converted to a LoDTensorA
rray.
table (ParamAttr|list): The variable that stores the level of lod
table (ParamAttr|list): The variable that stores the level of lod
which is ordered by sequence length in
which is ordered by sequence length in
descending order.
descending order. It is generally generated
by 'layers.lod_rank_table()' API.
Returns:
Returns:
Variable: The
variable of type array that has been converted from a
Variable: The
LoDTensorArray that has been converted from the input
tensor.
tensor.
Examples:
Examples:
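The new docstring describes the split-by-rank-table behavior but stops short of showing the two APIs together. A minimal sketch of the composition, assuming the fluid.layers API of this release (variable names are illustrative):

```python
import paddle.fluid as fluid

# x is a LoDTensor carrying one level of sequence (LoD) information
x = fluid.layers.data(name='x', shape=[10], dtype='float32', lod_level=1)
# the rank table orders the sequences in x by length, descending
table = fluid.layers.lod_rank_table(x, level=0)
# split x into a LoDTensorArray following that order
array = fluid.layers.lod_tensor_to_array(x=x, table=table)
```

As the docstring warns, this is normally wired up internally by DynamicRNN rather than called by users.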
...
@@ -1047,6 +1056,13 @@ def array_length(array):
 class ConditionalBlockGuard(BlockGuard):
+    """
+    ConditionalBlockGuard is derived from BlockGuard. It is dedicated to
+    holding a ConditionalBlock, and it helps users enter and exit the
+    ConditionalBlock via Python's 'with' keyword. However, ConditionalBlockGuard
+    is generally an internal component of IfElse; users should not use it directly.
+    """
+
     def __init__(self, block):
         if not isinstance(block, ConditionalBlock):
             raise TypeError("block should be conditional block")
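Since the docstring points users away from ConditionalBlockGuard and toward IfElse, a rough sketch of the IfElse pattern the guard supports may help. This follows my reading of the fluid control-flow API of this period, not code from this commit; treat the exact calls as assumptions:

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[1], dtype='float32')
zero = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.0)
cond = fluid.layers.less_than(x=x, y=zero)

ie = fluid.layers.IfElse(cond)
with ie.true_block():    # each block enters a ConditionalBlock via the guard
    neg = ie.input(x)
    ie.output(fluid.layers.scale(neg, scale=-1.0))
with ie.false_block():
    pos = ie.input(x)
    ie.output(pos)
out = ie()               # merged outputs from the two branches
```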
...
@@ -1563,17 +1579,15 @@ def reorder_lod_tensor_by_rank(x, rank_table):
 def is_empty(x, cond=None, **ignored):
"""
"""
**Is Empty**
Test whether an Variable is empty.
This layer returns the truth value of whether the variable is empty.
Args:
Args:
x
(Variable): Operand of *is_empty*
x
(Variable): The Variable to be tested.
cond
(Variable|None): Optional output variable to store the result
cond
(Variable|None): Output parameter. Returns the test result
of *is_empty*
of given 'x'.
Returns:
Returns:
Variable: The tensor variable storing the
output of *is_empty*
.
Variable: The tensor variable storing the
test result of 'x'
.
Raises:
Raises:
TypeError: If input cond is not a variable, or cond's dtype is
TypeError: If input cond is not a variable, or cond's dtype is
...
...
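A minimal usage sketch of the rewritten interface (illustrative only; it mirrors the two calling styles the Args section describes):

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[4, 32], dtype='float32')
res = fluid.layers.is_empty(x=x)   # result returned as a new bool Variable
# or reuse an existing Variable as the output parameter:
# fluid.layers.is_empty(x=x, cond=res)
```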
python/paddle/fluid/layers/learning_rate_scheduler.py
...
@@ -70,21 +70,40 @@ def noam_decay(d_model, warmup_steps):
 def exponential_decay(learning_rate, decay_steps, decay_rate, staircase=False):
"""Applies exponential decay to the learning rate.
"""
Applies exponential decay to the learning rate.
When training a model, it is often recommended to lower the learning rate as the
training progresses. By using this function, the learning rate will be decayed by
'decay_rate' every 'decay_steps' steps.
>>> if staircase == True:
>>> decayed_learning_rate = learning_rate * decay_rate ^ floor(global_step / decay_steps)
>>> else:
>>> decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
```python
decayed_learning_rate = learning_rate *
decay_rate ^ (global_step / decay_steps)
```
Args:
Args:
learning_rate
: A scalar float32 value or a Variable. This
learning_rate
(Variable|float): The initial learning rate.
will be the initial learning rate during training
decay_steps(int): See the decay computation above.
decay_
steps: A Python `int32` number
.
decay_
rate(float): The decay rate. See the decay computation above
.
decay_rate: A Python `float` number
.
staircase(Boolean): If True, decay the learning rate at discrete intervals
.
staircase: Boolean. If set true, decay the learning rate every decay_steps.
Default: False
Returns:
Returns:
The decayed learning rate
The decayed learning rate
Examples:
.. code-block:: python
base_lr = 0.1
sgd_optimizer = fluid.optimizer.SGD(
learning_rate=fluid.layers.exponential_decay(
learning_rate=base_lr,
decay_steps=10000,
decay_rate=0.5,
staircase=True))
sgd_optimizer.minimize(avg_cost)
"""
"""
     global_step = _decay_step_counter()
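To make the staircase distinction concrete, a small plain-Python check of the formula above (the `^` in the docstring is exponentiation; the numbers are illustrative, not from this commit):

```python
import math

base_lr, decay_rate, decay_steps = 0.1, 0.5, 10000

def exp_decay(global_step, staircase=False):
    # decayed_lr = base_lr * decay_rate ^ (global_step / decay_steps)
    exponent = global_step / decay_steps
    if staircase:
        exponent = math.floor(exponent)  # hold the rate constant per interval
    return base_lr * decay_rate ** exponent

print(exp_decay(15000))                  # 0.1 * 0.5**1.5 ~= 0.0354
print(exp_decay(15000, staircase=True))  # 0.1 * 0.5**1   == 0.05
```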
...
@@ -128,22 +147,39 @@ def natural_exp_decay(learning_rate, decay_steps, decay_rate, staircase=False):
 def inverse_time_decay(learning_rate, decay_steps, decay_rate, staircase=False):
"""Applies inverse time decay to the initial learning rate.
"""
Applies inverse time decay to the initial learning rate.
>>> if staircase:
When training a model, it is often recommended to lower the learning rate as the
training progresses. By using this function, an inverse decay function will be
applied to the initial learning rate.
>>> if staircase == True:
>>> decayed_learning_rate = learning_rate / (1 + decay_rate * floor(global_step / decay_step))
>>> decayed_learning_rate = learning_rate / (1 + decay_rate * floor(global_step / decay_step))
>>> else:
>>> else:
>>> decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / decay_step)
>>> decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / decay_step)
Args:
Args:
learning_rate
: A scalar float32 value or a Variable. This
learning_rate
(Variable|float): The initial learning rate.
will be the initial learning rate during training
.
decay_steps(int): See the decay computation above
.
decay_
steps: A Python `int32` number
.
decay_
rate(float): The decay rate. See the decay computation above
.
decay_rate: A Python `float` number
.
staircase(Boolean): If True, decay the learning rate at discrete intervals
.
staircase: Boolean. If set true, decay the learning rate every decay_steps.
Default: False
Returns:
Returns:
The decayed learning rate
The decayed learning rate
Examples:
.. code-block:: python
base_lr = 0.1
sgd_optimizer = fluid.optimizer.SGD(
learning_rate=fluid.layers.inverse_time_decay(
learning_rate=base_lr,
decay_steps=10000,
decay_rate=0.5,
staircase=True))
sgd_optimizer.minimize(avg_cost)
"""
"""
     global_step = _decay_step_counter()
...
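A companion check for inverse time decay, again in plain Python with illustrative numbers: unlike exponential decay's geometric fall-off, the rate here decreases hyperbolically.

```python
base_lr, decay_rate, decay_steps = 0.1, 0.5, 10000

def inv_time_decay(global_step, staircase=False):
    # decayed_lr = base_lr / (1 + decay_rate * global_step / decay_steps)
    progress = global_step / decay_steps
    if staircase:
        progress = float(int(progress))  # floor, since steps are non-negative
    return base_lr / (1.0 + decay_rate * progress)

print(inv_time_decay(10000))  # 0.1 / (1 + 0.5 * 1) ~= 0.0667
print(inv_time_decay(40000))  # 0.1 / (1 + 0.5 * 4) ~= 0.0333
```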
python/paddle/fluid/layers/nn.py
...
@@ -102,14 +102,15 @@ def fc(input,
     """
     **Fully Connected Layer**

-    The fully connected layer can take multiple tensors as its inputs. It
-    creates a variable called weights for each input tensor, which represents
-    a fully connected weight matrix from each input unit to each output unit.
-    The fully connected layer multiplies each input tensor with its corresponding
-    weight to produce an output Tensor. If multiple input tensors are given,
-    the results of multiple multiplications will be summed up. If bias_attr is
-    not None, a bias variable will be created and added to the output. Finally,
-    if activation is not None, it will be applied to the output as well.
+    This function creates a fully connected layer in the network. It can take
+    multiple tensors as its inputs. It creates a variable called weights for
+    each input tensor, which represents a fully connected weight matrix from
+    each input unit to each output unit. The fully connected layer multiplies
+    each input tensor with its corresponding weight to produce an output Tensor.
+    If multiple input tensors are given, the results of multiple multiplications
+    will be summed up. If bias_attr is not None, a bias variable will be created
+    and added to the output. Finally, if activation is not None, it will be applied
+    to the output as well.

     This process can be formulated as follows:
...
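The reworded paragraph is the whole story of fc's multi-input behavior; a minimal sketch (shapes are illustrative, assuming the fluid.layers.fc signature of this release):

```python
import paddle.fluid as fluid

a = fluid.layers.data(name='a', shape=[32], dtype='float32')
b = fluid.layers.data(name='b', shape=[64], dtype='float32')
# one weight matrix is created per input (32x128 and 64x128), the two
# products are summed, a bias is added, then ReLU is applied
out = fluid.layers.fc(input=[a, b], size=128, act='relu')
```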
@@ -878,7 +879,7 @@ def cos_sim(X, Y):
     Args:
         X (Variable): The input X.
         Y (Variable): The input Y.

     Returns:
         Variable: the output of cosine(X, Y).
     """
...
@@ -1083,7 +1084,7 @@ def chunk_eval(input,
         chunk_scheme (str): ${chunk_scheme_comment}
         num_chunk_types (int): ${num_chunk_types_comment}
         excluded_chunk_types (list): ${excluded_chunk_types_comment}

     Returns:
         tuple: tuple containing: (precision, recall, f1_score,
                                   num_infer_chunks, num_label_chunks,
...
@@ -1143,7 +1144,7 @@ def sequence_conv(input,
         bias_attr (ParamAttr|None): attributes for bias
         param_attr (ParamAttr|None): attributes for parameter
         act (str): the activation type

     Returns:
         Variable: output of sequence_conv
     """
...
@@ -1509,6 +1510,7 @@ def sequence_last_step(input):
     return sequence_pool(input=input, pool_type="last")


+@templatedoc()
 def pool2d(input,
            pool_size=-1,
            pool_type="max",
...
@@ -1520,12 +1522,12 @@ def pool2d(input,
            use_mkldnn=False,
            name=None):
     """
-    This function adds the operator for pooling in 2 dimensions, using the
-    pooling configurations mentioned in input parameters.
+    ${comment}

     Args:
         input (Variable): ${input_comment}
-        pool_size (int): ${ksize_comment}
+        pool_size (int): The side length of pooling windows. All pooling
+                         windows are squares with pool_size on a side.
         pool_type (str): ${pooling_type_comment}
         pool_stride (int): stride of the pooling layer.
         pool_padding (int): padding size.
...
@@ -1533,11 +1535,29 @@ def pool2d(input,
         use_cudnn (bool): ${use_cudnn_comment}
         ceil_mode (bool): ${ceil_mode_comment}
         use_mkldnn (bool): ${use_mkldnn_comment}
-        name (str): A name for this layer(optional). If set None, the layer
-                    will be named automatically.
+        name (str|None): A name for this layer(optional). If set None, the
+                         layer will be named automatically.

     Returns:
         Variable: output of pool2d layer.

+    Raises:
+        ValueError: If 'pool_type' is not "max" nor "avg"
+        ValueError: If 'global_pooling' is False and 'pool_size' is -1
+        ValueError: If 'use_cudnn' is not a bool value.
+
+    Examples:
+        .. code-block:: python
+
+          data = fluid.layers.data(
+              name='data', shape=[3, 32, 32], dtype='float32')
+          conv2d = fluid.layers.pool2d(
+              input=data,
+              pool_size=2,
+              pool_type='max',
+              pool_stride=1,
+              global_pooling=False)
     """
     if pool_type not in ["max", "avg"]:
         raise ValueError(
...
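For the docstring's own example (32x32 input, pool_size=2, pool_stride=1, no padding), the usual pooling output-size formula shows what to expect; the formula itself is not part of this diff, so treat it as background:

```python
# out = floor((in_size + 2*pad - pool_size) / pool_stride) + 1  (ceil_mode=False)
in_size, pad, pool_size, pool_stride = 32, 0, 2, 1
out = (in_size + 2 * pad - pool_size) // pool_stride + 1
print(out)  # 31 -> the example produces a [3, 31, 31] feature map
```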
@@ -1800,7 +1820,7 @@ def beam_search_decode(ids, scores, name=None):
         ids (Variable): ${ids_comment}
         scores (Variable): ${scores_comment}
         name (str): The name of this layer. It is optional.

     Returns:
         tuple: a tuple of two output variables: sentence_ids, sentence_scores
     """
...
@@ -2063,7 +2083,7 @@ def beam_search(pre_ids, ids, scores, beam_size, end_id, level=0):
         beam_size (int): ${beam_size_comment}
         end_id (int): ${end_id_comment}
         level (int): ${level_comment}

     Returns:
         tuple: a tuple of beam_search output variables: selected_ids, selected_scores
     '''
...
@@ -2719,7 +2739,7 @@ def topk(input, k, name=None):
     This operator is used to find values and indices of the k largest entries
     for the last dimension.

-    If the input is a vector (rank=1), finds the k largest entries in the vector
+    If the input is a vector (1-D Tensor), finds the k largest entries in the vector
     and outputs their values and indices as vectors. Thus values[j] is the j-th
     largest entry in input, and its index is indices[j].
...
@@ -2729,9 +2749,11 @@ def topk(input, k, name=None):
     Args:
         input(Variable): The input variable which can be a vector or Tensor with
             higher rank.
-        k(int): An integer value to specify the top k largest elements.
+        k(int): The number of top elements to look for along the last dimension
+                of input.
         name(str|None): A name for this layer(optional). If set None, the layer
             will be named automatically.
+            Default: None

     Returns:
         values(Variable): The k largest elements along each last dimensional
...
@@ -2739,13 +2761,16 @@ def topk(input, k, name=None):
         indices(Variable): The indices of values within the last dimension of
                            input.

+    Raises:
+        ValueError: If k < 1 or k is not less than the last dimension of input
+
     Examples:
         .. code-block:: python

            top5_values, top5_indices = layers.topk(input, k=5)
     """
     shape = input.shape
-    if k < 1 and k >= shape[-1]:
+    if k < 1 or k >= shape[-1]:
         raise ValueError("k must be greater than 0 and less than %d." %
                          (shape[-1]))
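The `and` → `or` switch is the substantive fix in this hunk: no k can be both less than 1 and at least shape[-1], so the old guard could never fire. A quick check with illustrative values:

```python
last_dim = 10
for k in (0, 5, 10):
    old_guard = k < 1 and k >= last_dim  # always False: bad k slipped through
    new_guard = k < 1 or k >= last_dim   # True for k=0 and k=10: ValueError raised
    print(k, old_guard, new_guard)
# 0  False True
# 5  False False
# 10 False True
```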
...
@@ -3045,7 +3070,7 @@ def nce(input,
         param_attr (ParamAttr|None): attributes for parameter
         bias_attr (ParamAttr|None): attributes for bias
         num_neg_samples (int): ${num_neg_samples_comment}

     Returns:
         Variable: output of nce layer.
     """
...
python/paddle/fluid/layers/tensor.py
...
@@ -79,20 +79,33 @@ def create_global_var(shape,
                       force_cpu=False,
                       name=None):
"""
"""
Create a global variable. such as global_step
Create a new variable in the global block(block 0).
Args:
Args:
shape(list[int]): shape of the variable
shape(list[int]): shape of the variable
value(float): the value of the variable
value(float): the value of the variable. The new created
dtype(string): element type of the parameter
variable will be filled with it.
persistable(bool): if this variable is persistable
dtype(string): data type of the variable
force_cpu(bool): force this variable to be on CPU
persistable(bool): if this variable is persistable.
Default: False
force_cpu(bool): force this variable to be on CPU.
Default: False
name(str|None): The name of the variable. If set to None the variable
name will be generated automatically.
Default: None
Returns:
Returns:
Variable: the created Variable
Variable: the created Variable
Examples:
.. code-block:: python
var = fluid.create_global_var(shape=[2,3], value=1.0, dtype='float32',
persistable=True, force_cpu=True, name='new_var')
"""
"""
     helper = LayerHelper("global_var", **locals())
     var = helper.create_global_variable(
-        dtype=dtype, shape=shape, persistable=persistable)
+        dtype=dtype, shape=shape, persistable=persistable, name=name)
     helper.set_variable_initializer(
         var, initializer=Constant(value=float(value), force_cpu=force_cpu))
...
@@ -152,10 +165,11 @@ def sums(input, out=None):
     Args:
         input (Variable|list): The input tensor that has the elements
                                that need to be summed up.
+        out (Variable|None): Output parameter. Returns the sum result.
+                             Default: None

     Returns:
-        Variable: The tensor type variable that has the sum of input
-                  written to it.
+        Variable: the sum of input. The same as the argument 'out'

     Examples:
         .. code-block:: python
...
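The example body is elided here, so a minimal sketch of sums with the newly documented out parameter left at its default (API names per this file; the constants are illustrative):

```python
import paddle.fluid as fluid

a = fluid.layers.fill_constant(shape=[2, 3], dtype='int64', value=1)
b = fluid.layers.fill_constant(shape=[2, 3], dtype='int64', value=2)
total = fluid.layers.sums(input=[a, b])  # element-wise sum; same Variable as 'out'
```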
@@ -328,13 +342,13 @@ def argmin(x, axis=0):
         x(Variable): The input to compute the indices of
                      the min elements.
         axis(int): Axis to compute indices along.

     Returns:
         Variable: The tensor variable storing the output

     Examples:
         .. code-block:: python

           out = fluid.layers.argmin(x=in, axis=0)
           out = fluid.layers.argmin(x=in, axis=-1)
     """
...
@@ -359,13 +373,13 @@ def argmax(x, axis=0):
         x(Variable): The input to compute the indices of
                      the max elements.
         axis(int): Axis to compute indices along.

     Returns:
         Variable: The tensor variable storing the output

     Examples:
         .. code-block:: python

           out = fluid.layers.argmax(x=in, axis=0)
           out = fluid.layers.argmax(x=in, axis=-1)
     """
...