PaddlePaddle / PaddleDetection
Commit d7b20584
Authored on Sep 04, 2017 by Cao Ying; committed via GitHub on Sep 04, 2017
Merge pull request #3845 from lcy-seso/rename_mse_to_square_error
rename mse_cost into square_error_cost.
Parents: 409ac4a3, 10eacac9
Showing 10 changed files with 35 additions and 29 deletions (+35 / -29)
Changed files:
doc/api/v2/config/layer.rst  (+2 / -2)
doc/getstarted/basic_usage/index_cn.rst  (+2 / -2)
doc/getstarted/basic_usage/index_en.rst  (+1 / -1)
doc/getstarted/concepts/src/train.py  (+1 / -1)
doc/getstarted/concepts/use_concepts_cn.rst  (+3 / -3)
doc/howto/usage/k8s/k8s_distributed_cn.md  (+1 / -1)
python/paddle/trainer_config_helpers/layers.py  (+17 / -12)
python/paddle/trainer_config_helpers/tests/configs/protostr/test_cost_layers_with_weight.protostr  (+4 / -4)
python/paddle/trainer_config_helpers/tests/configs/test_cost_layers_with_weight.py  (+1 / -1)
python/paddle/v2/tests/test_layer.py  (+3 / -2)
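For a config written against the v2 API, the rename is a one-line change. A minimal before/after sketch, using the call sites that appear in the doc diffs below:

    # before this commit
    cost = paddle.layer.mse_cost(input=y_predict, label=y)

    # after this commit
    cost = paddle.layer.square_error_cost(input=y_predict, label=y)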
doc/api/v2/config/layer.rst
@@ -434,9 +434,9 @@ lambda_cost
 .. autoclass:: paddle.v2.layer.lambda_cost
     :noindex:

-mse_cost
+square_error_cost
 --------
-.. autoclass:: paddle.v2.layer.mse_cost
+.. autoclass:: paddle.v2.layer.square_error_cost
     :noindex:

 rank_cost
doc/getstarted/basic_usage/index_cn.rst
@@ -55,7 +55,7 @@ PaddlePaddle is a deep learning platform that originated at Baidu. This short introduction ...
     # linear layer: ȳ = wx + b
     ȳ = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b'))
     # compute the error, i.e. the distance between ȳ and the true y
-    cost = mse_cost(input=ȳ, label=y)
+    cost = square_error_cost(input=ȳ, label=y)
     outputs(cost)
@@ -69,7 +69,7 @@ PaddlePaddle is a deep learning platform that originated at Baidu. This short introduction ...
 - **Data layer**: the data layer `data_layer` is the entry point of the network; it reads the data and passes it on to the following layers. There are two data layers here, corresponding to the variables `x` and `y`.
 - **Fully connected layer**: the fully connected layer `fc_layer` is the basic computation unit; here it models the linear relationship between the variables. Computation units are the core of a neural network, and PaddlePaddle supports many kinds of them plus networks of arbitrary depth, so arbitrary functions can be fitted to learn complex relationships in the data.
-- **Regression cost layer**: the regression cost layer `mse_cost` is one of many cost-function layers; during training it serves as the network's output and computes the model error, which is the objective of parameter optimization.
+- **Regression cost layer**: the regression cost layer `square_error_cost` is one of many cost-function layers; during training it serves as the network's output and computes the model error, which is the objective of parameter optimization.
 After defining the network structure and saving it as `trainer_config.py`, run the following training command:
doc/getstarted/basic_usage/index_en.rst
@@ -49,7 +49,7 @@ To recover this relationship between ``X`` and ``Y``, we use a neural network wi...
     x = data_layer(name='x', size=1)
     y = data_layer(name='y', size=1)
     y_predict = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b'))
-    cost = mse_cost(input=y_predict, label=y)
+    cost = square_error_cost(input=y_predict, label=y)
     outputs(cost)
 Some of the most fundamental usages of PaddlePaddle are demonstrated:
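The bullet list in index_cn.rst above describes a complete v1-style trainer config. Assembled from the snippets in these two files, and assuming the usual `from paddle.trainer_config_helpers import *` preamble (which lies outside the shown hunks), `trainer_config.py` is roughly:

    # minimal linear-regression config, sketched from the snippets above
    x = data_layer(name='x', size=1)
    y = data_layer(name='y', size=1)
    y_predict = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1,
                         act=LinearActivation(), bias_attr=ParamAttr(name='b'))
    cost = square_error_cost(input=y_predict, label=y)
    outputs(cost)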
doc/getstarted/concepts/src/train.py
@@ -8,7 +8,7 @@ paddle.init(use_gpu=False)
 x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(2))
 y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
 y = paddle.layer.data(name='y', type=paddle.data_type.dense_vector(1))
-cost = paddle.layer.mse_cost(input=y_predict, label=y)
+cost = paddle.layer.square_error_cost(input=y_predict, label=y)

 # create parameters
 parameters = paddle.parameters.create(cost)
doc/getstarted/concepts/use_concepts_cn.rst
@@ -81,9 +81,9 @@ PaddlePaddle supports different types of input data, mainly four types, and ...
 .. code-block:: bash

     y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
-    cost = paddle.layer.mse_cost(input=y_predict, label=y)
+    cost = paddle.layer.square_error_cost(input=y_predict, label=y)

-Here, x and y are the input layers described earlier; y_predict takes x as input and attaches a fully connected layer; cost takes y_predict and y as input and attaches a mean squared error layer.
+Here, x and y are the input layers described earlier; y_predict takes x as input and attaches a fully connected layer; cost takes y_predict and y as input and attaches a squared error layer.
 The last layer, cost, records the entire topology of the network; by composing different layers we can build up the neural network.
@@ -147,4 +147,4 @@ PaddlePaddle supports different types of input data, mainly four types, and ...
 .. literalinclude:: src/train.py
    :linenos:
-For a practical application of linear regression, see the `first chapter <http://book.paddlepaddle.org/index.html>`_ of the PaddlePaddle book.
\ No newline at end of file
+For a practical application of linear regression, see the `first chapter <http://book.paddlepaddle.org/index.html>`_ of the PaddlePaddle book.
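The paragraph above describes how the layers compose. Put together as a self-contained v2 snippet, assuming the usual `import paddle.v2 as paddle` and the x/y data layers described earlier in that document (both appear in src/train.py above):

    import paddle.v2 as paddle  # assumed import, as in src/train.py

    paddle.init(use_gpu=False)
    x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(2))
    y = paddle.layer.data(name='y', type=paddle.data_type.dense_vector(1))
    y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
    # cost takes y_predict and y and attaches the squared error layer
    cost = paddle.layer.square_error_cost(input=y_predict, label=y)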
doc/howto/usage/k8s/k8s_distributed_cn.md
@@ -213,7 +213,7 @@ I1116 09:10:17.123440 50 Util.cpp:130] Calling runInitFunctions
 I1116 09:10:17.123764 50 Util.cpp:143] Call runInitFunctions done.
 [WARNING 2016-11-16 09:10:17,227 default_decorators.py:40] please use keyword arguments in paddle config.
 [INFO 2016-11-16 09:10:17,239 networks.py:1282] The input order is [movie_id, title, genres, user_id, gender, age, occupation, rating]
-[INFO 2016-11-16 09:10:17,239 networks.py:1289] The output order is [__mse_cost_0__]
+[INFO 2016-11-16 09:10:17,239 networks.py:1289] The output order is [__square_error_cost_0__]
 I1116 09:10:17.392917 50 Trainer.cpp:170] trainer mode: Normal
 I1116 09:10:17.613910 50 PyDataProvider2.cpp:257] loading dataprovider dataprovider::process
 I1116 09:10:17.680917 50 PyDataProvider2.cpp:257] loading dataprovider dataprovider::process
python/paddle/trainer_config_helpers/layers.py
@@ -53,7 +53,7 @@ __all__ = [
     'cos_sim',
     'hsigmoid',
     'conv_projection',
-    'mse_cost',
+    'square_error_cost',
     'regression_cost',
     'classification_cost',
     'LayerOutput',
@@ -4238,13 +4238,18 @@ def __cost_input__(input, label, weight=None):
 @wrap_name_default()
 @layer_support()
-def mse_cost(input, label, weight=None, name=None, coeff=1.0, layer_attr=None):
+def square_error_cost(input, label, weight=None, name=None, coeff=1.0, layer_attr=None):
     """
-    mean squared error cost:
+    sum of square error cost:

     .. math::

-        \\frac{1}{N}\sum_{i=1}^N(t_i-y_i)^2
+        cost = \\sum_{i=1}^N(t_i-y_i)^2

     :param name: layer name.
     :type name: basestring
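The docstring change also corrects the formula: the layer computes the sum of squared differences over a sample, not the mean. A standalone NumPy illustration of the corrected formula (illustrative only, not PaddlePaddle code):

    import numpy as np

    def square_error(label, prediction):
        # matches the corrected docstring: cost = sum_i (t_i - y_i)^2
        t = np.asarray(label, dtype=float)
        y = np.asarray(prediction, dtype=float)
        return np.sum((t - y) ** 2)

    # example: square_error([1.0, 2.0], [0.5, 2.5]) == 0.5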
@@ -4273,7 +4278,7 @@ def mse_cost(input, label, weight=None, name=None, coeff=1.0, layer_attr=None):
     return LayerOutput(name, LayerType.COST, parents=parents, size=1)

-regression_cost = mse_cost
+regression_cost = square_error_cost

 @wrap_name_default("cost")
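Because `regression_cost` is re-pointed at the renamed function, configs that used that alias keep working unchanged, while direct callers of `mse_cost` must switch to `square_error_cost`. A minimal sketch, reusing hypothetical layer outputs `fc` and `lbl` like those in the test configs below:

    # both names build the same square_error cost layer after this commit
    cost_a = square_error_cost(input=fc, label=lbl)
    cost_b = regression_cost(input=fc, label=lbl)  # alias kept for compatibility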
@@ -5798,9 +5803,9 @@ def huber_regression_cost(input,
                           coeff=1.0,
                           layer_attr=None):
     """
-    In statistics, the Huber loss is a loss function used in robust regression,
-    that is less sensitive to outliers in data than the squared error loss.
-    Given a prediction f(x), a label y and :math:`\delta`, the loss function
+    In statistics, the Huber loss is a loss function used in robust regression,
+    that is less sensitive to outliers in data than the squared error loss.
+    Given a prediction f(x), a label y and :math:`\delta`, the loss function
     is defined as:

     .. math:
@@ -5848,13 +5853,13 @@ def huber_classification_cost(input,
                               coeff=1.0,
                               layer_attr=None):
     """
-    For classification purposes, a variant of the Huber loss called modified Huber
-    is sometimes used. Given a prediction f(x) (a real-valued classifier score) and
-    a true binary class label :math:`y\in \left \{-1, 1 \right \}`, the modified Huber
+    For classification purposes, a variant of the Huber loss called modified Huber
+    is sometimes used. Given a prediction f(x) (a real-valued classifier score) and
+    a true binary class label :math:`y\in \left \{-1, 1 \right \}`, the modified Huber
     loss is defined as:

     .. math:

-    loss = \max \left ( 0, 1-yf(x) \right )^2, yf(x)\geq 1
+    loss = \max \left ( 0, 1-yf(x) \right )^2, yf(x)\geq 1
     loss = -4yf(x), \text{otherwise}

     The example usage is:
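For reference, the Huber regression loss these docstrings describe has the standard piecewise form: quadratic for small residuals, linear beyond delta. A short NumPy sketch of that textbook definition (not copied from the docstring, whose full formula is elided in this view):

    import numpy as np

    def huber_regression_loss(y_true, y_pred, delta=1.0):
        # quadratic near zero, linear for residuals larger than delta
        a = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
        return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))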
python/paddle/trainer_config_helpers/tests/configs/protostr/test_cost_layers_with_weight.protostr
@@ -45,7 +45,7 @@ layers {
   coeff: 1.0
 }
 layers {
-  name: "__mse_cost_0__"
+  name: "__square_error_cost_0__"
   type: "square_error"
   size: 1
   active_type: ""
@@ -130,7 +130,7 @@ input_layer_names: "label"
 input_layer_names: "weight"
 input_layer_names: "multi_class_label"
 output_layer_names: "__cost_0__"
-output_layer_names: "__mse_cost_0__"
+output_layer_names: "__square_error_cost_0__"
 output_layer_names: "__nce_layer_0__"
 evaluators {
   name: "classification_error_evaluator"
@@ -146,7 +146,7 @@ sub_models {
   layer_names: "weight"
   layer_names: "__fc_layer_0__"
   layer_names: "__cost_0__"
-  layer_names: "__mse_cost_0__"
+  layer_names: "__square_error_cost_0__"
   layer_names: "multi_class_label"
   layer_names: "__nce_layer_0__"
   input_layer_names: "input"
@@ -154,7 +154,7 @@ sub_models {
   input_layer_names: "weight"
   input_layer_names: "multi_class_label"
   output_layer_names: "__cost_0__"
-  output_layer_names: "__mse_cost_0__"
+  output_layer_names: "__square_error_cost_0__"
   output_layer_names: "__nce_layer_0__"
   evaluator_names: "classification_error_evaluator"
   is_recurrent_layer_group: false
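Note that the serialized layer type stays "square_error"; only the auto-generated default layer name follows the Python-level rename. A sketch of where the name in this protostr comes from, reusing the call from the test config below:

    # with no explicit name, the default one is derived from the function name
    cost = square_error_cost(input=fc, label=lbl, weight=wt)
    # serialized layer: name "__square_error_cost_0__", type "square_error"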
python/paddle/trainer_config_helpers/tests/configs/test_cost_layers_with_weight.py
@@ -10,7 +10,7 @@ fc = fc_layer(input=data, size=10, act=SoftmaxActivation())
 outputs(
     classification_cost(input=fc, label=lbl, weight=wt),
-    mse_cost(input=fc, label=lbl, weight=wt),
+    square_error_cost(input=fc, label=lbl, weight=wt),
     nce_layer(input=fc,
python/paddle/v2/tests/test_layer.py
@@ -134,8 +134,9 @@ class CostLayerTest(unittest.TestCase):
         cost3 = layer.cross_entropy_cost(input=inference, label=label)
         cost4 = layer.cross_entropy_with_selfnorm_cost(input=inference, label=label)
-        cost5 = layer.mse_cost(input=inference, label=label)
-        cost6 = layer.mse_cost(input=inference, label=label, weight=weight)
+        cost5 = layer.square_error_cost(input=inference, label=label)
+        cost6 = layer.square_error_cost(input=inference, label=label, weight=weight)
         cost7 = layer.multi_binary_label_cross_entropy_cost(input=inference, label=label)
         cost8 = layer.rank_cost(left=score, right=score, label=score)