Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle)
Commit 0ae67091
Authored Jun 14, 2018 by qiaolongfei

update document

Parent: 76129f03
Showing 3 changed files with 16 additions and 12 deletions (+16, -12)
python/paddle/fluid/layers/control_flow.py   (+9, -7)
python/paddle/fluid/layers/learning_rate_scheduler.py   (+5, -5)
python/paddle/fluid/layers/nn.py   (+2, -0)
python/paddle/fluid/layers/control_flow.py
@@ -76,13 +76,13 @@ def split_lod_tensor(input, mask, level=0):
     Examples:
         .. code-block:: python

-          x = layers.data(name='x', shape=[1])
+          x = fluid.layers.data(name='x', shape=[1])
           x.persistable = True
-          y = layers.data(name='y', shape=[1])
+          y = fluid.layers.data(name='y', shape=[1])
           y.persistable = True
-          out_true, out_false = layers.split_lod_tensor(
+          out_true, out_false = fluid.layers.split_lod_tensor(
                 input=x, mask=y, level=level)
     """
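For readers following the corrected snippet: it is a docstring fragment, so `level` is left undefined there. A self-contained sketch of the same call, assuming the 2018-era `paddle.fluid` API and an illustrative `level=0`, might look like this:

    import paddle.fluid as fluid

    # two persistable inputs; shapes are illustrative
    x = fluid.layers.data(name='x', shape=[1])
    x.persistable = True
    y = fluid.layers.data(name='y', shape=[1])
    y.persistable = True
    # split rows of x into two LoDTensors according to the mask tensor y
    out_true, out_false = fluid.layers.split_lod_tensor(input=x, mask=y, level=0)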
@@ -891,7 +891,7 @@ def array_write(x, i, array=None):
 def create_array(dtype):
     """
-    **Create LoDTensor Array**
+    **Create LoDTensorArray**

     This function creates an array of LOD_TENSOR_ARRAY . It is mainly used to
     implement RNN with array_write, array_read and While.
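As a rough illustration of the renamed LoDTensorArray concept, such an array is usually created empty and then filled with `array_write`, often inside a `While` loop. This is a minimal sketch assuming the fluid 1.x API; the constants are placeholders, not part of the commit:

    import paddle.fluid as fluid

    arr = fluid.layers.create_array(dtype='float32')                   # empty LOD_TENSOR_ARRAY
    i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
    x = fluid.layers.fill_constant(shape=[3], dtype='float32', value=1.0)
    arr = fluid.layers.array_write(x, i=i, array=arr)                  # arr[0] = x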
@@ -989,7 +989,8 @@ def array_read(array, i):
     Returns:
         Variable: The tensor type variable that has the data written to it.
     Examples:
-        .. code-block::python
+        .. code-block:: python
+
           tmp = fluid.layers.zeros(shape=[10], dtype='int32')
           i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
           arr = layers.array_read(tmp, i=i)
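The untouched context lines above still pass a plain tensor to `array_read` and use the bare `layers.` prefix; a write-then-read sketch under the same assumed fluid 1.x API would be:

    import paddle.fluid as fluid

    tmp = fluid.layers.zeros(shape=[10], dtype='int32')
    i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
    arr = fluid.layers.array_write(tmp, i=i)    # create an array and write tmp at index 10
    item = fluid.layers.array_read(arr, i=i)    # read the same slot back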
@@ -1027,7 +1028,7 @@ def shrink_memory(x, i, table):
 def array_length(array):
     """
-    **Get the length of Input LoDTensorArray**
+    **Get the Length of Input LoDTensorArray**

     This function performs the operation to find the length of the input
     LOD_TENSOR_ARRAY.
@@ -1042,12 +1043,13 @@ def array_length(array):
         Variable: The length of the input LoDTensorArray.

     Examples:
-        .. code-block::python
+        .. code-block:: python
+
           tmp = fluid.layers.zeros(shape=[10], dtype='int32')
           i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
           arr = fluid.layers.array_write(tmp, i=i)
           arr_len = fluid.layers.array_length(arr)
     """
     helper = LayerHelper('array_length', **locals())
     tmp = helper.create_tmp_variable(dtype='int64')
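To see what `array_length` returns at run time, the docstring example can be run through an executor. This is a sketch assuming the fluid 1.x `Executor` API; the expected value rests on the assumption that `array_write` grows the array to index `i + 1`:

    import paddle.fluid as fluid

    tmp = fluid.layers.zeros(shape=[10], dtype='int32')
    i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
    arr = fluid.layers.array_write(tmp, i=i)
    arr_len = fluid.layers.array_length(arr)

    exe = fluid.Executor(fluid.CPUPlace())
    length, = exe.run(fluid.default_main_program(), fetch_list=[arr_len])
    print(length)   # expected: [11], since writing at index 10 grows the array to 11 entries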
python/paddle/fluid/layers/learning_rate_scheduler.py
@@ -163,11 +163,11 @@ def polynomial_decay(learning_rate,
                       power=1.0,
                       cycle=False):
     """
-    **polynomial_decay**
+    **Polynomial Decay**

     Applies polynomial decay to the initial learning rate.

-    .. code-block::python
+    .. code-block:: python

       if cycle:
           decay_steps = decay_steps * ceil(global_step / decay_steps)
@@ -180,9 +180,9 @@ def polynomial_decay(learning_rate,
         learning_rate(Variable|float32): A scalar float32 value or a Variable. This
           will be the initial learning rate during training
         decay_steps(int32): A Python `int32` number.
-        end_learning_rate(float): A Python `float` number.
-        power(float): A Python `float` number
-        cycle(bool, Default False): Boolean. If set true, decay the learning rate every decay_steps.
+        end_learning_rate(float, Default: 0.0001): A Python `float` number.
+        power(float, Default: 1.0): A Python `float` number
+        cycle(bool, Default: False): Boolean. If set true, decay the learning rate every decay_steps.

     Returns:
         The decayed learning rate
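The defaults added to the docstring match the signature shown in the first hunk. A minimal usage sketch, assuming the fluid 1.x optimizer API, feeds the decayed rate straight into an optimizer; the concrete numbers are illustrative:

    import paddle.fluid as fluid

    # learning rate decays from 0.1 toward end_learning_rate over decay_steps steps
    lr = fluid.layers.polynomial_decay(
        learning_rate=0.1,
        decay_steps=10000,
        end_learning_rate=0.0001,   # default per the updated docstring
        power=1.0,                  # default per the updated docstring
        cycle=False)                # default per the updated docstring
    sgd = fluid.optimizer.SGD(learning_rate=lr)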
python/paddle/fluid/layers/nn.py
@@ -1615,7 +1615,9 @@ def batch_norm(input,
     Can be used as a normalizer function for conv2d and fully_connected operations.

     The required data format for this layer is one of the following:
+
     1. NHWC `[batch, in_height, in_width, in_channels]`
+
     2. NCHW `[batch, in_channels, in_height, in_width]`

     Refer to `Batch Normalization: Accelerating Deep Network Training by Reducing
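To connect the two layouts to actual usage: a sketch assuming the fluid 1.x layers API (including its `data_layout` argument), where the conv2d output is in the NCHW format listed second above:

    import paddle.fluid as fluid

    # NCHW input: [batch, in_channels, in_height, in_width]
    img = fluid.layers.data(name='img', shape=[3, 32, 32], dtype='float32')
    conv = fluid.layers.conv2d(input=img, num_filters=16, filter_size=3)
    # batch_norm as a normalizer for the conv output; layout matches format 2 (NCHW)
    out = fluid.layers.batch_norm(input=conv, act='relu', data_layout='NCHW')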