Commit 004df46f
Authored on Feb 12, 2018 by xuwei06

Make print_op able to show the value of bool tensor

And some minor fixes on comments.

Parent: 432d2b5d

Showing 4 changed files with 25 additions and 24 deletions (+25 -24)
Changed files:
  paddle/fluid/operators/elementwise_op_function.h   +0  -1
  paddle/fluid/operators/print_op.cc                 +9  -7
  python/paddle/v2/fluid/layers/control_flow.py      +1  -1
  python/paddle/v2/fluid/layers/nn.py                +15 -15
paddle/fluid/operators/elementwise_op_function.h

@@ -314,7 +314,6 @@ EIGEN_FUNCTOR(Div, EIGEN_DIV);
 template <typename DeviceContext, typename T, typename functor,
           typename broadcastfunctor, typename broadcast2functor>
 void ElementwiseGradCompute(const framework::ExecutionContext& ctx,
                             const framework::Tensor* x,
                             const framework::Tensor* y,
                             const framework::Tensor* out,
paddle/fluid/operators/print_op.cc

@@ -46,7 +46,7 @@ struct Formater {
   }
 
  private:
-  void PrintMessage() { CLOG << std::time(nullptr) << "\t" << message; }
+  void PrintMessage() { CLOG << std::time(nullptr) << "\t" << message << "\t"; }
   void PrintName() {
     if (!name.empty()) {
       CLOG << "Tensor[" << name << "]" << std::endl;

@@ -85,15 +85,16 @@ struct Formater {
     // print float
     if (dtype.hash_code() == typeid(float).hash_code()) {
       Display<float>(size);
-    }
-    if (dtype.hash_code() == typeid(double).hash_code()) {
+    } else if (dtype.hash_code() == typeid(double).hash_code()) {
       Display<double>(size);
-    }
-    if (dtype.hash_code() == typeid(int).hash_code()) {
+    } else if (dtype.hash_code() == typeid(int).hash_code()) {
       Display<int>(size);
-    }
-    if (dtype.hash_code() == typeid(int64_t).hash_code()) {
+    } else if (dtype.hash_code() == typeid(int64_t).hash_code()) {
       Display<int64_t>(size);
+    } else if (dtype.hash_code() == typeid(bool).hash_code()) {
+      Display<bool>(size);
     } else {
       CLOG << "\tdata: unprintable type: " << dtype.name() << std::endl;
     }
   }

@@ -182,6 +183,7 @@ class TensorPrintOp : public framework::OperatorBase {
     }
 
     Formater formater;
     formater.message = Attr<std::string>("message");
     if (Attr<bool>("print_tensor_name")) {
       formater.name = printed_var_name;
     }
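For context, here is a minimal Python-side sketch of what this change enables: printing a boolean tensor (for example, the output of a comparison op) through the Print layer. It is illustrative only and not part of the diff; the data, less_than and Print layers are assumed from the same codebase.

import paddle.v2.fluid.layers as layers

# Build a tiny program that produces a bool tensor and prints it.
x = layers.data(name='x', shape=[1], dtype='float32')
y = layers.data(name='y', shape=[1], dtype='float32')
cond = layers.less_than(x=x, y=y)  # elementwise comparison -> bool tensor
# With this commit, print_op can display the bool values at runtime.
cond = layers.Print(cond, message="x < y:")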
python/paddle/v2/fluid/layers/control_flow.py

@@ -174,7 +174,7 @@ def Print(input,
         print_tensor_type (bool): Print the tensor type.
         print_tensor_shape (bool): Print the tensor shape.
         print_tensor_lod (bool): Print the tensor lod.
-        print_phase (bool): Which phase to displace, including 'forward',
+        print_phase (str): Which phase to displace, including 'forward',
             'backward' and 'both'. If set to 'backward' or 'both', will
             print the gradients of input tensor.
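As the corrected docstring says, print_phase is a string. A hypothetical follow-up sketch (not part of this commit) that prints only the gradient flowing into a variable during the backward pass:

import paddle.v2.fluid as fluid

x = fluid.layers.data(name='x', shape=[4], dtype='float32')
y = fluid.layers.fc(input=x, size=1)
# print_phase='backward' asks print_op to show y's gradient instead of y itself.
y = fluid.layers.Print(y, message="fc_out grad:", print_phase='backward')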
python/paddle/v2/fluid/layers/nn.py

@@ -1579,7 +1579,7 @@ def layer_norm(input,
     """
     **Layer Normalization**
 
-    Assume feature vectors exist on dimensions
+    Assume feature vectors exist on dimensions
     :attr:`begin_norm_axis ... rank(input)` and calculate the moment statistics
     along these dimensions for each feature vector :math:`a` with size
     :math:`H`, then normalize each feature vector using the corresponding
@@ -1600,13 +1600,13 @@ def layer_norm(input,
     Args:
         input(Variable): The input tensor variable.
-        scale(bool): Whether to learn the adaptive gain :math:`g` after
+        scale(bool): Whether to learn the adaptive gain :math:`g` after
             normalization.
-        shift(bool): Whether to learn the adaptive bias :math:`b` after
+        shift(bool): Whether to learn the adaptive bias :math:`b` after
             normalization.
-        begin_norm_axis(bool): The normalization will be performed along
+        begin_norm_axis(bool): The normalization will be performed along
             dimensions from :attr:`begin_norm_axis` to :attr:`rank(input)`.
-        epsilon(float): The small value added to the variance to prevent
+        epsilon(float): The small value added to the variance to prevent
             division by zero.
         param_attr(ParamAttr|None): The parameter attribute for the learnable
             gain :math:`g`.
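For reference, a hedged sketch of calling layer_norm with the arguments documented above; the concrete shapes and default values are assumptions for illustration, not part of this diff.

import paddle.v2.fluid as fluid

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
# Normalize each sample over dimensions [begin_norm_axis, rank(data)),
# learning an adaptive gain g (scale) and bias b (shift).
out = fluid.layers.layer_norm(input=data, scale=True, shift=True,
                              begin_norm_axis=1, epsilon=1e-5)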
@@ -2070,7 +2070,7 @@ def reduce_sum(input, dim=None, keep_dim=False, name=None):
             Tensor variable with a single element, otherwise must be in the
             range :math:`[-rank(input), rank(input))`. If :math:`dim < 0`,
             the dimension to reduce is :math:`rank + dim`.
-        keep_dim (bool): Whether to reserve the reduced dimension in the
+        keep_dim (bool|False): Whether to reserve the reduced dimension in the
             output Tensor. The result tensor will have one fewer dimension
             than the :attr:`input` unless :attr:`keep_dim` is true.
         name(str|None): A name for this layer(optional). If set None, the layer
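An illustrative sketch (not part of the commit) of the keep_dim behaviour described above:

import paddle.v2.fluid as fluid

x = fluid.layers.data(name='x', shape=[3, 4], dtype='float32',
                      append_batch_size=False)
total = fluid.layers.reduce_sum(x)                            # single-element tensor
rows = fluid.layers.reduce_sum(x, dim=1)                      # shape [3]
rows_kept = fluid.layers.reduce_sum(x, dim=1, keep_dim=True)  # shape [3, 1]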
@@ -3098,33 +3098,33 @@ def multiplex(inputs, index):
 def softmax_with_cross_entropy(logits, label, soft_label=False):
     """
     **Softmax With Cross Entropy Operator.**
 
     Cross entropy loss with softmax is used as the output layer extensively. This
     operator computes the softmax normalized values for each row of the input
     tensor, after which cross-entropy loss is computed. This provides a more
     numerically stable gradient.
 
     Because this operator performs a softmax on logits internally, it expects
     unscaled logits. This operator should not be used with the output of
     softmax operator since that would produce incorrect results.
 
     When the attribute soft_label is set false, this operators expects mutually
     exclusive hard labels, each sample in a batch is in exactly one class with a
     probability of 1.0. Each sample in the batch will have a single label.
 
     The equation is as follows:
 
     1) Hard label (one-hot label, so every sample has exactly one class)
 
     .. math::
 
         loss_j = -\\text{logit}_{label_j} +
         \\log\\left(\\sum_{i=0}^{K}\\exp(\\text{logit}_i)\\right), j = 1,..., K
 
     2) Soft label (each sample can have a distribution over all classes)
 
     .. math::
 
         loss_j = -\\sum_{i=0}^{K}\\text{label}_i
         \\left(\\text{logit}_i - \\log\\left(\\sum_{i=0}^{K}
         \\exp(\\text{logit}_i)\\right)\\right), j = 1,...,K
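A short usage sketch of the operator documented above, using hard int64 labels; only the softmax_with_cross_entropy signature comes from this diff, while the surrounding data, fc and mean layers are illustrative assumptions.

import paddle.v2.fluid as fluid

image = fluid.layers.data(name='image', shape=[784], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
logits = fluid.layers.fc(input=image, size=10)  # unscaled logits, no softmax here
# soft_label=False (the default): exactly one hard label per sample.
loss = fluid.layers.softmax_with_cross_entropy(logits=logits, label=label)
avg_loss = fluid.layers.mean(x=loss)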
@@ -3169,7 +3169,7 @@ def smooth_l1(x, y, inside_weight=None, outside_weight=None, sigma=None):
     The operator takes the first dimension of X and Y as batch size.
     For each instance, it computes the smooth l1 loss element by element first
     and then sums all the losses. So the shape of Out is [batch_size, 1].
 
     Args:
         x (Variable): A tensor with rank at least 2. The input value of smooth
             l1 loss op with shape [batch_size, dim1, ..., dimN].
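Likewise, a hypothetical call to smooth_l1 matching the shapes described above (illustrative only, not part of the diff):

import paddle.v2.fluid as fluid

pred = fluid.layers.data(name='pred', shape=[4], dtype='float32')
target = fluid.layers.data(name='target', shape=[4], dtype='float32')
# Per-instance smooth L1 losses are summed, so the output shape is [batch_size, 1].
loss = fluid.layers.smooth_l1(x=pred, y=target)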