机器未来 / Paddle (forked from PaddlePaddle / Paddle)

Commit 82e4fab4
Authored on Aug 23, 2017 by caoying03
Parent: b7359ee3

follow comments.

4 changed files with 58 additions and 65 deletions (+58 -65)
paddle/gserver/layers/KmaxSeqScoreLayer.cpp       +14 -12
paddle/gserver/layers/SequenceSliceLayer.cpp      +26 -37
paddle/gserver/layers/SubNestedSequenceLayer.cpp  +17 -12
python/paddle/trainer/config_parser.py             +1  -4

paddle/gserver/layers/KmaxSeqScoreLayer.cpp (view file @ 82e4fab4)

@@ -80,13 +80,14 @@ void KmaxSeqScoreLayer::forward(PassType passType) {
       << "input of " << getName()
       << " must be a sequence or a nested sequence.";
   CHECK_EQ(input.value->getWidth(), 1UL)
-      << "input of " << getName()
-      << " are scores over a sequence or "
-      << "a nested sequence, so its width must be 1.";
+      << "input of " << getName()
+      << " is score over a sequence or a nested sequence, so its width "
+      << " must be 1.";

   if (useGpu_) {
-    // this Layer runs only in CPU, if the model is runing on GPU,
-    // then copy the input to this layer from GPU to CPU.
+    /*
+     * currently, this Layer only runs in CPU, if the other part of the model is
+     * runing on GPU, then copy the input to this layer from GPU to CPU.
+     */
     Matrix::resizeOrCreate(scores_,
                            inputScore->getHeight(),
                            1,
 ...

@@ -97,13 +98,14 @@ void KmaxSeqScoreLayer::forward(PassType passType) {
     scores_ = inputScore;
   }

-  // TODO(caoying)
-  // In PaddlePaddle, the currently available matrixes all a have real-typed
-  // data field, but the selected indices information are actually int-typed
-  // (with -1 as a special token). Storing indices information in real-typed
-  // Matrix leads to converting real to int. This is very dangerous if a user
-  // fills this matrix himself, invalid data may occur.
-  // The selected indices should be stored in an int-typed matrix.
+  /*
+   * TODO(caoying)
+   * In PaddePaddle, currently all matrices are real number types,
+   * but output of this layer which is some selected indices of the give
+   * sequence are actually filled with int types so that storing int types
+   * information in a real number matrix is dangerous, since real numbers will
+   * be convered to int types.
+   */
   Matrix::resizeOrCreate(
       output_.value,
       input.hasSubseq() ? input.getNumSubSequences() : input.getNumSequences(),
 ...
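
Aside: the rewritten TODO in this hunk argues that the selected indices produced by the layer are integers stored in a real-valued Matrix (with -1. as the end-of-selection token), and that the real-to-int round trip is fragile. A minimal standalone sketch of the failure mode it warns about, in plain C++ rather than Paddle's Matrix API (the index value is only illustrative):

#include <cstdio>

// A float mantissa has 24 bits, so integer indices above 2^24 are no longer
// exactly representable; writing such an index into a real-typed matrix and
// reading it back yields a different row. The -1. sentinel itself is safe,
// because small integers are exact in floating point.
int main() {
  int idx = (1 << 24) + 1;                  // hypothetical selected row index
  float stored = static_cast<float>(idx);   // what a real-typed Matrix keeps
  int readBack = static_cast<int>(stored);  // what the consumer recovers
  std::printf("wrote %d, read back %d\n", idx, readBack);  // 16777217 vs 16777216
  return 0;
}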

paddle/gserver/layers/SequenceSliceLayer.cpp (view file @ 82e4fab4)

@@ -31,13 +31,15 @@ public:
   void backward(const UpdateCallback& callback = nullptr) override;

 private:
-  // TODO(caoying)
-  // In PaddlePaddle, the currently available matrixes all a have real-typed
-  // data field, but the selected indices information are actually int-typed
-  // (with -1 as a special token). Storing indices information in real-typed
-  // Matrix leads to converting real to int. This is very dangerous if a user
-  // fills this matrix himself, invalid data may occur.
-  // The selected indices should be stored in an int-typed matrix.
+  /*
+   * TODO(caoying)
+   * In PaddePaddle, currently all matrices are real number types,
+   * but the second and the (optional) third input which are some
+   * selected indices of the give sequence to trim the sequence, are actually
+   * filled with int types so that storing int types information in real number
+   * matrices is very dangerous, since real numbers will be convered to int
+   * types. If a user fills this matrix himself, invalid data may occor.
+   */
   MatrixPtr startIdsOnCpu_;
   MatrixPtr endIdsOnCpu_;

@@ -68,7 +70,7 @@ bool SequenceSliceLayer::init(const LayerMap& layerMap,
 void SequenceSliceLayer::checkInputs() {
   const Argument& inputSeq = getInput(0);
-  CHECK(inputSeq.hasSeq()) << "The first input of sequence slic layer "
+  CHECK(inputSeq.hasSeq()) << "The first input of sequence slice layer "
                            << "must be a sequence.";
   const MatrixPtr indices1 = getInputValue(1);
   CHECK_EQ(static_cast<size_t>(indices1->getHeight()),
@@ -86,22 +88,6 @@ void SequenceSliceLayer::checkInputs() {
...
@@ -86,22 +88,6 @@ void SequenceSliceLayer::checkInputs() {
}
}
void
SequenceSliceLayer
::
copySliceIdsToCpu
()
{
void
SequenceSliceLayer
::
copySliceIdsToCpu
()
{
if
(
!
useGpu_
)
{
if
(
inputLayers_
.
size
()
==
2U
)
{
if
(
config_
.
select_first
())
{
startIdsOnCpu_
=
getInputValue
(
1
);
endIdsOnCpu_
=
nullptr
;
}
else
{
startIdsOnCpu_
=
nullptr
;
endIdsOnCpu_
=
getInputValue
(
1
);
}
}
else
if
(
inputLayers_
.
size
()
==
3U
)
{
startIdsOnCpu_
=
getInputValue
(
1
);
endIdsOnCpu_
=
getInputValue
(
2
);
}
return
;
}
const
MatrixPtr
indices1
=
getInputValue
(
1
);
const
MatrixPtr
indices1
=
getInputValue
(
1
);
if
(
inputLayers_
.
size
()
==
2U
)
{
if
(
inputLayers_
.
size
()
==
2U
)
{
if
(
config_
.
select_first
())
{
if
(
config_
.
select_first
())
{

@@ -141,22 +127,19 @@ void SequenceSliceLayer::copySliceIdsToCpu() {
 void SequenceSliceLayer::calSelectedRows(const MatrixPtr starts,
                                          const MatrixPtr ends) {
+  CHECK(starts && ends);
   outSeqStartPos_.resize(1, 0);
   outSubSeqStartPos_.resize(1, 0);
   selectedRows_.clear();

   size_t beamSize = starts ? starts->getWidth() : ends->getWidth();
-  // iterate over sequence
   size_t rowIdx = 0;
   for (size_t i = 0; i < inputSeqInfoVec_.size(); ++i) {
-    // iterate over sub-sequence in a sequence
     for (size_t j = 0; j < inputSeqInfoVec_[i].size() - 1; ++j) {
-      // iterate over each index for slicing.
       for (size_t k = 0; k < beamSize; ++k) {
-        if (starts) {
-          if (starts->getElement(rowIdx, k) == -1.) break;
-        } else if (ends->getElement(rowIdx, k) == -1.) break;
+        if (starts && starts->getElement(rowIdx, k) == -1.) break;
+        if (ends && ends->getElement(rowIdx, k) == -1.) break;

         int begPos = inputSeqInfoVec_[i][j];
         if (starts) begPos += starts->getElement(rowIdx, k);
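
Aside: the reworked break condition above treats starts and ends symmetrically; whichever index matrix is supplied, a -1 entry in the current beam row ends the scan for that (sub)sequence. A small self-contained sketch of that row scan, with plain vectors standing in for Paddle's MatrixPtr (names are illustrative, not Paddle API):

#include <cstddef>
#include <vector>

// Count the usable slice indices in one beam row, stopping at the first -1
// sentinel found in either the start row or the end row. Either row may be
// absent (empty), mirroring the null checks on starts/ends in the diff.
size_t usableBeamEntries(const std::vector<float>& startRow,
                         const std::vector<float>& endRow,
                         size_t beamSize) {
  size_t k = 0;
  for (; k < beamSize; ++k) {
    if (!startRow.empty() && startRow[k] == -1.f) break;
    if (!endRow.empty() && endRow[k] == -1.f) break;
  }
  return k;
}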

@@ -165,7 +148,7 @@ void SequenceSliceLayer::calSelectedRows(const MatrixPtr starts,
         if (ends) endPos = inputSeqInfoVec_[i][j] + ends->getElement(rowIdx, k);

         int seqLen = endPos - begPos + 1;
-        CHECK(seqLen);
+        CHECK_LT(seqLen, 0U);
         for (int m = begPos; m <= endPos; ++m) selectedRows_.push_back(m);
         inputSeqInfoVec_.size() > 1
             ? outSubSeqStartPos_.push_back(outSubSeqStartPos_.back() + seqLen)
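
Aside: the unchanged lines at the end of this hunk grow outSubSeqStartPos_ by back() + seqLen, i.e. the output boundaries are a running prefix sum of slice lengths. A tiny sketch of that bookkeeping with a plain std::vector (illustrative, outside Paddle):

#include <vector>

// Boundary vectors start as {0}; appending slices of length 3, 2 and 4 yields
// {0, 3, 5, 9}, so slice i occupies output rows [bounds[i], bounds[i + 1]).
void appendSlice(std::vector<int>& bounds, int seqLen) {
  bounds.push_back(bounds.back() + seqLen);
}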

@@ -208,7 +191,16 @@ void SequenceSliceLayer::forward(PassType passType) {
   Argument::reorganizeSeqInfo(inputSeq.sequenceStartPositions,
                               inputSeq.subSequenceStartPositions,
                               inputSeqInfoVec_);
-  copySliceIdsToCpu();
+  if (!useGpu_) {
+    if (inputLayers_.size() == 2U) {
+      startIdsOnCpu_ = config_.select_first() ? getInputValue(1) : nullptr;
+      endIdsOnCpu_ = config_.select_first() ? nullptr : getInputValue(1);
+    } else if (inputLayers_.size() == 3U) {
+      startIdsOnCpu_ = getInputValue(1);
+      endIdsOnCpu_ = getInputValue(2);
+    }
+  } else
+    copySliceIdsToCpu();

   // calculate the selected row indices in a batch,
   // and build the output sequence information.

@@ -221,10 +213,7 @@ void SequenceSliceLayer::forward(PassType passType) {
 }

 void SequenceSliceLayer::backward(const UpdateCallback& callback) {
-  MatrixPtr inputSeqGrad = getInputGrad(0);
-  MatrixPtr outputGrad = getOutputGrad();
-  outputGrad->addToRows(*inputSeqGrad, *rowIndice_);
+  getOutputGrad()->addToRows(*getInputGrad(0), *rowIndice_);
 }

 }  // namespace paddle
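
Aside: the condensed backward body relies on Matrix::addToRows together with rowIndice_, the per-output-row record of which input row each selected row came from. Assuming addToRows scatter-adds rows of the calling matrix into the argument matrix at the given indices (my reading of the call above, not a documented signature), a toy equivalent with nested vectors looks like this:

#include <cstddef>
#include <vector>

// outGrad has one row per selected row; rowIndice maps each of those rows back
// to its source row in inGrad. Gradients are accumulated rather than assigned,
// because the same input row may be selected more than once.
void scatterAddRows(const std::vector<std::vector<float>>& outGrad,
                    const std::vector<int>& rowIndice,
                    std::vector<std::vector<float>>& inGrad) {
  for (size_t r = 0; r < outGrad.size(); ++r) {
    std::vector<float>& dst = inGrad[rowIndice[r]];
    for (size_t c = 0; c < outGrad[r].size(); ++c) dst[c] += outGrad[r][c];
  }
}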

paddle/gserver/layers/SubNestedSequenceLayer.cpp (view file @ 82e4fab4)

@@ -58,23 +58,28 @@ private:
   void calSelectedRows(const MatrixPtr selectedIndices,
                        const std::vector<std::vector<int>>& inputSeqInfo);

-  // if the second input of this layer is on GPU memory, copy it to CPU memory.
-  // TODO(caoying)
-  // In PaddlePaddle, the currently available matrixes all a have real-typed
-  // data field, but the selected indices information are actually int-typed
-  // (with -1 as a special token). Storing indices information in real-typed
-  // Matrix leads to converting real to int. This is very dangerous if a user
-  // fills this matrix himself, invalid data may occur.
-  // The selected indices should be stored in an int-typed matrix.
+  /*
+   * TODO(caoying)
+   * In PaddePaddle, currently all matrices are real number types,
+   * but the second is some selected indices of the give sequence to trim
+   * the nested sequence, are actually filled with int types so that storing
+   * int types information in real number matrices is very dangerous, since
+   * real numbers will be convered to int types. If a user fills this matrix
+   * himself, invalid data may occor.
+   *
+   * if the second input of this layer is on GPU memory, copy it to CPU memory.
+   */
   MatrixPtr selIdsCpu_;

-  // reorganized sequenceStartPositions and subSequenceStartPositions
-  // into a 2d vector to facilitate the sequence selection process.
+  /*
+   * reorganize sequenceStartPositions and subSequenceStartPositions
+   * into a 2d vector to facilitate the sequence selection process.
+   */
   std::vector<std::vector<int>> inputSeqInfoVec_;

-  // the final selected row indices in a batch,
-  // rowIndice_ and selectedRows_ actually share a same memory.
+  /* store the final selected row indices in a batch */
   IVectorPtr rowIndice_;
+  /* rowIndice_ and selectedRows_ actually share a same memory. */
   std::vector<int> selectedRows_;
 };
 ...
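
Aside: the comment on inputSeqInfoVec_ describes regrouping the flat sequenceStartPositions and subSequenceStartPositions into one boundary vector per top-level sequence, which is what Argument::reorganizeSeqInfo is used for in the SequenceSliceLayer diff above. A minimal sketch of that regrouping for the nested case (hypothetical free function; the real helper may differ, e.g. for non-nested input):

#include <cstddef>
#include <vector>

// seqStarts holds the start offset of every top-level sequence plus the total
// length, e.g. {0, 5, 9}; subSeqStarts holds the sub-sequence offsets over the
// same rows, e.g. {0, 2, 5, 7, 9}. The result groups the sub-sequence
// boundaries per sequence: {{0, 2, 5}, {5, 7, 9}}.
std::vector<std::vector<int>> regroupSeqInfo(const std::vector<int>& seqStarts,
                                             const std::vector<int>& subSeqStarts) {
  std::vector<std::vector<int>> info(seqStarts.size() - 1);
  size_t j = 0;
  for (size_t i = 0; i + 1 < seqStarts.size(); ++i) {
    while (j < subSeqStarts.size() && subSeqStarts[j] < seqStarts[i + 1])
      info[i].push_back(subSeqStarts[j++]);
    info[i].push_back(seqStarts[i + 1]);  // close the group with the sequence end
  }
  return info;
}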

python/paddle/trainer/config_parser.py (view file @ 82e4fab4)

@@ -2717,10 +2717,7 @@ class SeqSliceLayer(LayerBase):
                 'If start and end indices are both given to'
                 'sequence slice layer, they should have the same width.')
         elif len(inputs) == 2:
-            if starts is not None:
-                self.config.select_first = True
-            else:
-                self.config.select_first = False
+            self.config.select_first = (starts is not None)


 @config_layer('sub_nested_seq')