PaddlePaddle / Paddle
Commit fb2aa717

Polish Operators Docs (r) (#5377)

* polish r operator docs
* fix on naming convention

Authored by kexinzhao on Nov 04, 2017; committed by Yi Wang on Nov 04, 2017.
Parent: 30a85204
6 changed files with 60 additions and 47 deletions (+60 −47):

paddle/operators/name_convention.md   +6  −2
paddle/operators/rank_loss_op.cc      +14 −14
paddle/operators/recurrent_op.cc      +9  −7
paddle/operators/reduce_op.cc         +10 −7
paddle/operators/reshape_op.cc        +6  −3
paddle/operators/rmsprop_op.cc        +15 −14
paddle/operators/name_convention.md

```diff
@@ -44,17 +44,21 @@ public:
     AddOutput("Out", "(Tensor) Accumulated output tensor");
     AddAttr<float>("gamma", "(float, default 1.0) Accumulation multiplier")
         .SetDefault(1.0f);
     AddComment(R"DOC(
-Accumulate operator accumulates the input tensor to the output tensor. If the
+Accumulate Operator.
+
+This operator accumulates the input tensor to the output tensor. If the
 output tensor already has the right size, we add to it; otherwise, we first
 initialize the output tensor to all zeros, and then do accumulation. Any
 further calls to the operator, given that no one else fiddles with the output
 in the interim, will do simple accumulations.
-Accumulation is done as shown:
+
+Accumulation is done as follows:
+
 Out = 1*X + gamma*Out
+
 where X is the input tensor, Out is the output tensor and gamma is the multiplier
 argument.
 )DOC");
   }
 };
```
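The accumulation rule in the DOC string above, Out = 1*X + gamma*Out, can be sketched in Python. This is a minimal list-based model for illustration only, not Paddle's implementation; the `accumulate` helper is hypothetical:

```python
def accumulate(x, out=None, gamma=1.0):
    """Model of the documented rule: Out = 1*X + gamma*Out.

    When no output tensor exists yet (out is None), it is initialized
    to zeros, so the first call reduces to Out = X.
    """
    if out is None:
        out = [0.0] * len(x)
    return [xi + gamma * oi for xi, oi in zip(x, out)]

out = accumulate([1.0, 2.0])           # first call: Out = X
out = accumulate([3.0, 4.0], out)      # subsequent call: Out = X + gamma*Out
```

Repeated calls with the same `out` perform simple accumulation, matching the "initialize to all zeros, then accumulate" behavior the DOC string describes.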
paddle/operators/rank_loss_op.cc

```diff
@@ -26,9 +26,9 @@ class RankLossOp : public framework::OperatorWithKernel {
   void InferShape(framework::InferShapeContext *ctx) const override {
     // input check
-    PADDLE_ENFORCE(ctx->HasInput("Label"), "Input(Label) shouldn't be null");
-    PADDLE_ENFORCE(ctx->HasInput("Left"), "Input(Left) shouldn't be null");
-    PADDLE_ENFORCE(ctx->HasInput("Right"), "Input(Right) shouldn't be null");
+    PADDLE_ENFORCE(ctx->HasInput("Label"), "Input(Label) shouldn't be null.");
+    PADDLE_ENFORCE(ctx->HasInput("Left"), "Input(Left) shouldn't be null.");
+    PADDLE_ENFORCE(ctx->HasInput("Right"), "Input(Right) shouldn't be null.");
     auto label_dims = ctx->GetInputDim("Label");
     auto left_dims = ctx->GetInputDim("Left");
@@ -50,32 +50,32 @@ class RankLossOpMaker : public framework::OpProtoAndCheckerMaker {
     AddInput("Label",
              "The label indicating A ranked higher than B or not, row vector.");
     AddInput("Left", "The output of RankNet for doc A, vector.");
-    AddInput("Right", "The output of RankNet for doc B, vetor");
+    AddInput("Right", "The output of RankNet for doc B, vetor.");
     AddOutput("Out", "The output loss of RankLoss operator, vector.");
-    AddComment(R"DOC(RankLoss operator
+    AddComment(R"DOC(
+RankLoss Operator.
 
-Rank loss operator for RankNet[1]. RankNet is a pairwise ranking model with
+RankLoss operator for RankNet
+(http://icml.cc/2015/wp-content/uploads/2015/06/icml_ranking.pdf).
+RankNet is a pairwise ranking model with
 one training sample consisting of a pair of doc A and B, and the label P
 indicating that A is ranked higher than B or not:
 
 P = {0, 1} or {0, 0.5, 1}, where 0.5 means no information about the rank of
 the input pair.
 
-The RankLoss operator contains three inputs: Left (o_i), Right (o_j) and Label
-(P_{i,j}), which represent the output of RankNet for two docs and the label
-respectively, and yields the rank loss C_{i,j} by following the expression
+The RankLoss operator takes three inputs: Left (o_i), Right (o_j) and Label
+(P_{i,j}), which represent the output of RankNet for the two docs and the label,
+respectively, and yields the rank loss C_{i,j} using the following equation:
 
-\f[
+$$
 C_{i,j} = -\tilde{P_{ij}} * o_{i,j} + log(1 + e^{o_{i,j}}) \\
 o_{i,j} = o_i - o_j \\
 \tilde{P_{i,j}} = \left \{0, 0.5, 1 \right \} \ or \ \left \{0, 1 \right \}
-\f]
+$$
 
 The operator can take inputs of one sample or in batch.
 
-[1]. Chris Burges, Tal Shaked, Erin Renshaw, et al. Learning to
-     Rank using Gradient Descent.
-     http://icml.cc/2015/wp-content/uploads/2015/06/icml_ranking.pdf
 )DOC");
   }
 };
```
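The rank-loss equation in the DOC string above can be exercised numerically with a small Python sketch. The `rank_loss` helper is illustrative only, operating on scalars rather than tensors:

```python
import math

def rank_loss(o_i, o_j, p):
    """RankNet pairwise loss from the DOC string:
    C_{i,j} = -P~_{i,j} * o_{i,j} + log(1 + e^{o_{i,j}}),
    with o_{i,j} = o_i - o_j.
    """
    o_ij = o_i - o_j
    # log1p(exp(.)) computes log(1 + e^{o_{i,j}})
    return -p * o_ij + math.log1p(math.exp(o_ij))
```

When the two scores are equal (o_i == o_j), the loss is log 2 for any label; when the label says A ranks higher (p = 1), a pair the model orders correctly (o_i > o_j) incurs a smaller loss than the reversed pair.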
paddle/operators/recurrent_op.cc

```diff
@@ -509,14 +509,14 @@ class RecurrentOpProtoMaker : public framework::OpProtoAndCheckerMaker {
     AddInput(kInitialStates, "rnn initial states").AsDuplicable();
     AddInput(kParameters,
              "Parameters are used by step block as its input. However, the "
-             "inputs is not a sequence tensor. Every time step, each operator "
-             "in step block just use the parameter directly")
+             "input is not a sequence tensor. Every time step, each operator "
+             "in step block just use the parameter directly.")
         .AsDuplicable();
     AddOutput(kOutputs,
-              "The output sequence of RNN. The sequence length must be same")
+              "The output sequence of RNN. The sequence length must be same.")
         .AsDuplicable();
     AddOutput(kStepScopes,
-              "StepScopes contains all local variables in each time step.");
+              "StepScopes contain all local variables in each time step.");
     AddAttr<std::vector<std::string>>(kExStates,
                                       string::Sprintf(
                                           R"DOC(The ex-state variable names.
@@ -556,10 +556,12 @@ if reverse is True
       o          o          o          o
 )DOC").SetDefault(false);
     AddAttr<bool>(kIsTrain, "").SetDefault(true);
-    AddComment(R"DOC(Static Length Recurrent Operator
-The static length recurrent operator can only operate on fix sized sequence
-data, i.e. in each mini-batch, the sequence length of all inputs are same.
+    AddComment(R"DOC(
+Static Length Recurrent Operator.
+
+The static length recurrent operator can only operate on fixed size sequence
+data, i.e. in each mini-batch, the sequence length of all inputs are the same.
 )DOC");
   }
 };
```
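The fixed-length recurrence described in the DOC string can be illustrated with a small Python unroll over one sequence. This is a toy model of a single batch element; `static_rnn` and `step` are illustrative names under my own assumptions, not Paddle APIs:

```python
def static_rnn(inputs, h0, step):
    """Unroll a recurrence over a fixed number of time steps.

    `inputs` has one entry per time step; every sequence in a mini-batch
    must share this length, which is the operator's constraint.
    """
    h, outputs = h0, []
    for x_t in inputs:
        h = step(x_t, h)   # the "step block": new state from input and state
        outputs.append(h)
    return outputs
```

With a trivial step function step(x, h) = h + x and initial state 0, `static_rnn([1, 2, 3], 0, ...)` yields the running sums of the input sequence.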
paddle/operators/reduce_op.cc

```diff
@@ -80,24 +80,27 @@ class ReduceOpMaker : public framework::OpProtoAndCheckerMaker {
  public:
   ReduceOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
       : OpProtoAndCheckerMaker(proto, op_checker) {
-    AddInput("X",
-             "(Tensor) The input tensor. Tensors with rank at most 6 are supported");
+    AddInput("X",
+             "(Tensor) The input tensor. Tensors with rank at most 6 are "
+             "supported.");
     AddOutput("Out", "(Tensor) The result tensor.");
     AddAttr<int>(
         "dim",
-        "(int, default 1) The dimension to reduce. "
+        "(int, default 0) The dimension to reduce. "
         "Must be in the range [-rank(input), rank(input)). "
         "If `dim < 0`, the dim to reduce is `rank + dim`. "
-        "Noting that reducing on the first dim will make the LoD info lost.")
+        "Note that reducing on the first dim will make the LoD info lost.")
         .SetDefault(0);
     AddAttr<bool>("keep_dim",
                   "(bool, default false) "
                   "If true, retain the reduced dimension with length 1.")
         .SetDefault(false);
     comment_ = R"DOC(
-{ReduceOP} operator computes the {reduce} of input tensor along the given dimension.
-The result tensor has 1 fewer dimension than the input unless `keep_dim` is true.
+{ReduceOp} Operator.
+
+This operator computes the {reduce} of input tensor along the given dimension.
+The result tensor has 1 fewer dimension than the input unless keep_dim is true.
 )DOC";
     AddComment(comment_);
   }
```
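The `dim` semantics documented above, including the `rank + dim` rule for negative values and the `keep_dim` flag, can be sketched for a rank-2 input. This is an illustrative pure-Python model; the real operator supports tensors up to rank 6:

```python
def reduce_sum(x, dim=0, keep_dim=False):
    """Sum a rank-2 "tensor" (list of lists) along `dim`.

    A negative dim is interpreted as rank + dim, so for rank 2, dim=-1
    reduces along dim 1 (the documented range is [-rank, rank)).
    With keep_dim the reduced dimension is retained with length 1.
    """
    rank = 2
    if dim < 0:
        dim += rank
    if dim == 0:
        out = [sum(col) for col in zip(*x)]          # collapse rows
        return [out] if keep_dim else out            # shape (1, cols)
    out = [sum(row) for row in x]                    # collapse columns
    return [[v] for v in out] if keep_dim else out   # shape (rows, 1)
```

Without `keep_dim` the result has one fewer dimension than the input, exactly as the comment template states.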
paddle/operators/reshape_op.cc

```diff
@@ -71,8 +71,11 @@ class ReshapeOpMaker : public framework::OpProtoAndCheckerMaker {
       : OpProtoAndCheckerMaker(proto, op_checker) {
     AddInput("X", "The input tensor of reshape operator.");
     AddOutput("Out", "The output tensor of reshape operator.");
-    AddAttr<std::vector<int>>("shape", "Target shape of reshape operator.");
-    AddComment(R"DOC(Reshape operator
+    AddAttr<std::vector<int>>("shape",
+                              "(vector<int>) "
+                              "Target shape of reshape operator.");
+    AddComment(R"DOC(
+Reshape Operator.
 
 Reshape Input(X) into the shape specified by Attr(shape).
@@ -81,7 +84,7 @@ Given a 2-D tensor X with 2 rows and 2 columns
 
 [[1, 2], [3, 4]]
 
-with target shape = [1, 4], the reshape operator will transform
+and target shape = [1, 4], the reshape operator will transform
 the tensor X into a 1-D tensor:
 
 [1, 2, 3, 4]
```
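The reshape example in the DOC string can be mimicked in plain Python. This is an illustrative sketch for nested lists, not Paddle's kernel, and it handles only 1-D and 2-D targets:

```python
def reshape(x, shape):
    """Flatten a rank-2 nested list and regroup it into `shape`.

    Supports 1-D targets like [4] and 2-D targets like [1, 4];
    the total number of elements must be unchanged.
    """
    flat = [v for row in x for v in row]
    if len(shape) == 1:
        assert shape[0] == len(flat), "element count must match"
        return flat
    rows, cols = shape
    assert rows * cols == len(flat), "element count must match"
    return [flat[i * cols:(i + 1) * cols] for i in range(rows)]
```

For X = [[1, 2], [3, 4]], a 1-D target of 4 elements yields [1, 2, 3, 4], matching the DOC example; a 2-D target like [1, 4] keeps the extra length-1 dimension.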
paddle/operators/rmsprop_op.cc

```diff
@@ -68,22 +68,22 @@ class RmspropOpMaker : public framework::OpProtoAndCheckerMaker {
       : OpProtoAndCheckerMaker(proto, op_checker) {
     AddInput("Param",
              "(Tensor, default Tensor<float>) "
-             "Input parameter value that has to be updated");
+             "Input parameter value that has to be updated.");
     AddInput("MeanSquare",
              "(Tensor, default Tensor<float>)"
-             " The mean square value that gets updated");
+             " The mean square value that gets updated.");
     AddInput("LearningRate",
              "(Tensor, default Tensor<float>) "
-             "The learning rate should be a tensor of size 1");
+             "The learning rate should be a tensor of size 1.");
     AddInput("Grad",
              "(Tensor, default Tensor<float>) "
-             "Input gradient of the parameter");
+             "Input gradient of the parameter.");
     AddInput("Moment",
-             "(Tensor, default Tensor<float>) The moment that gets updated");
+             "(Tensor, default Tensor<float>) The moment that gets updated.");
-    AddOutput("ParamOut", "(Tensor) Output updated parameter value");
-    AddOutput("MomentOut", "(Tensor) Output updated moment");
-    AddOutput("MeanSquareOut", "(Tensor) Output Mean squared updated value");
+    AddOutput("ParamOut", "(Tensor) Output updated parameter value.");
+    AddOutput("MomentOut", "(Tensor) Output updated moment.");
+    AddOutput("MeanSquareOut", "(Tensor) Output Mean squared updated value.");
     AddAttr<float>("epsilon",
                    "(float, default 1e-10) Constant "
@@ -93,18 +93,19 @@ class RmspropOpMaker : public framework::OpProtoAndCheckerMaker {
                    "(float, default 0.9) "
                    "Discounting factor for coming gradient.")
         .SetDefault(0.9f);
-    AddAttr<float>("momentum", "(float, default 0.0) Constant value")
+    AddAttr<float>("momentum", "(float, default 0.0) Constant value.")
         .SetDefault(0.0f);
     AddComment(R"DOC(
-RMSprop
+Rmsprop Optimizer.
 
-MeanSquareOut = decay * MeanSquare + (1 - decay) * Grad * Grad
+$$
+MeanSquareOut = decay * MeanSquare + (1 - decay) * Grad * Grad \\
 MomentOut = momentum * Moment +
-            LearningRate * Grad / sqrt(MeanSquareOut + epsilon)
+            \frac{LearningRate * Grad}{\sqrt{MeanSquareOut + epsilon}} \\
 ParamOut = Param - MomentOut
+$$
 
-The original slides that proposed RMSprop: Slide 29 of
+The original slides that proposed Rmsprop: Slide 29 of
 http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf)
 )DOC");
```
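The three update equations in the DOC string above can be exercised with a scalar Python sketch. This is illustrative only; `rmsprop_step` is a hypothetical helper, not a Paddle API, and real tensors would be updated element-wise:

```python
import math

def rmsprop_step(param, grad, mean_square, moment, lr,
                 decay=0.9, momentum=0.0, epsilon=1e-10):
    """One scalar RMSprop update, following the documented equations:

    MeanSquareOut = decay * MeanSquare + (1 - decay) * Grad^2
    MomentOut     = momentum * Moment + lr * Grad / sqrt(MeanSquareOut + epsilon)
    ParamOut      = Param - MomentOut
    """
    mean_square = decay * mean_square + (1.0 - decay) * grad * grad
    moment = momentum * moment + lr * grad / math.sqrt(mean_square + epsilon)
    param = param - moment
    return param, mean_square, moment
```

The defaults mirror the attribute defaults in the maker (decay 0.9, momentum 0.0, epsilon 1e-10); with momentum left at 0.0 the update reduces to plain RMSprop without a velocity term.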