Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle)
Commit 784e242b
Author: gaoyuan
Date: Mar 22, 2017
Parent: 57c355a1

Remove redundancy codes
Showing 4 changed files, with 11 additions and 7 deletions (+11 -7):

- doc/api/v2/config/layer.rst (+6 -0)
- paddle/gserver/layers/CrossChannelNormLayer.cpp (+0 -2)
- paddle/gserver/layers/NormLayer.h (+4 -5)
- python/paddle/trainer_config_helpers/layers.py (+1 -0)
doc/api/v2/config/layer.rst

@@ -109,6 +109,12 @@ sum_to_one_norm
     :members: sum_to_one_norm
     :noindex:
 
+cross_channel_norm
+------------------
+.. automodule:: paddle.v2.layer
+    :members: cross_channel_norm
+    :noindex:
+
 Recurrent Layers
 ================
...
paddle/gserver/layers/CrossChannelNormLayer.cpp

@@ -40,7 +40,6 @@ void CrossChannelNormLayer::forward(PassType passType) {
   normBuffer_->addScalar(*normBuffer_, 1e-6);
   inV->square2(*dataBuffer_);
   for (size_t i = 0; i < batchSize; i++) {
-    spatialBuffer_->zeroMem();
     MatrixPtr inTmp = Matrix::create(
         inV->getData() + i * dataDim, channels_, spatialDim, false, useGpu_);
     MatrixPtr dataTmp = Matrix::create(dataBuffer_->getData() + i * dataDim,
...

@@ -80,7 +79,6 @@ void CrossChannelNormLayer::backward(const UpdateCallback& callback) {
   scaleDiff_->zeroMem();
   for (size_t i = 0; i < batchSize; i++) {
     spatialBuffer_->zeroMem();
-    channelBuffer_->zeroMem();
     // propagate to param.
     MatrixPtr dataBufferTmp =
         Matrix::create(dataBuffer_->getData() + i * dataDim,
...
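The two deletions above remove redundant zeroMem() calls on buffers that are fully overwritten later in each loop iteration. The surrounding forward pass L2-normalizes each sample across channels at every spatial position. A minimal NumPy sketch of that computation (illustrative only; the function name and shapes are assumptions, and the epsilon placement approximates the `addScalar(*normBuffer_, 1e-6)` call, not Paddle's exact arithmetic):

```python
import numpy as np

def cross_channel_norm_forward(x, scale, eps=1e-6):
    """Sketch of the forward pass: L2-normalize across channels at each
    spatial position, then rescale each channel by a trainable factor.

    x:     (batch, channels, spatial_dim) input, where spatial_dim = H * W
    scale: (channels,) trainable per-channel factors
    """
    # Sum of squares across channels plus a small epsilon for stability
    # (mirroring the 1e-6 added to normBuffer_ in the C++ code).
    norm = np.sqrt((x ** 2).sum(axis=1, keepdims=True) + eps)
    return (x / norm) * scale[None, :, None]

# One sample, two channels, one spatial position: the (3, 4) channel
# vector normalizes to roughly (0.6, 0.8) before scaling by 1.
y = cross_channel_norm_forward(np.array([[[3.0], [4.0]]]), np.ones(2))
```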
paddle/gserver/layers/NormLayer.h

@@ -66,11 +66,10 @@ public:
 };
 
 /**
- * This layer applys normalize across the channels of each sample to a
- * conv layer's output and scale the output by a group of trainable factors
- * which dimensions equal to the channel's number.
- * - Input: One and only one input layer are accepted. The input layer must be
- * be a data output layer.
+ * This layer applys normalization across the channels of each sample to a
+ * conv layer's output, and scales the output by a group of trainable factors
+ * whose dimensions equal to the number of channels.
+ * - Input: One and only one input layer are accepted.
  * - Output: The normalized data of the input data.
  * Reference:
  *    Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed,
...
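As a numeric sanity check of the property the revised comment describes: after cross-channel normalization, the channel vector at each spatial position has unit L2 norm, and the trainable per-channel factors rescale it. This is a NumPy sketch under assumed shapes, not PaddlePaddle code:

```python
import numpy as np

# One sample: (channels, spatial) with random values, plus one trainable
# factor per channel (values chosen arbitrarily for illustration).
rng = np.random.default_rng(0)
channels, spatial = 3, 4
x = rng.normal(size=(channels, spatial))
scale = np.array([0.5, 1.0, 2.0])

# Normalize across channels at each spatial position, then rescale.
norm = np.sqrt((x ** 2).sum(axis=0, keepdims=True))  # (1, spatial)
y = (x / norm) * scale[:, None]

# Undoing the scaling recovers unit-norm channel vectors at every position.
unit = ((y / scale[:, None]) ** 2).sum(axis=0)
```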
python/paddle/trainer_config_helpers/layers.py

@@ -1015,6 +1015,7 @@ def cross_channel_norm_layer(input, name=None, param_attr=None):
     This layer applys normalize across the channels of each sample to
     a conv layer's output and scale the output by a group of trainable
     factors which dimensions equal to the channel's number.
 
     :param name: The Layer Name.
     :type name: basestring
     :param input: The input layer.
...