Commit 70c29322 (PaddlePaddle/Paddle)
Authored Dec 09, 2016 by dangqingqing

fix conflicts

Parents: 212162b2, 2039070e

Showing 18 changed files with 813 additions and 479 deletions (+813 -479)
doc_cn/algorithm/rnn/glossary_rnn.dot                                 +42   -0
doc_cn/algorithm/rnn/glossary_rnn_with_memory.dot                     +48   -0
doc_cn/algorithm/rnn/hierarchical-rnn.md                              +0    -403
doc_cn/algorithm/rnn/hrnn_demo.rst                                    +7    -0
doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst                         +230  -0
doc_cn/algorithm/rnn/simple_full_hierarchical_recurrent.dot           +30   -0
doc_cn/algorithm/rnn/simple_full_recurrent.dot                        +19   -0
doc_cn/concepts/use_concepts.rst                                      +1    -3
doc_cn/conf.py.in                                                     +1    -1
doc_cn/faq/index.rst                                                  +1    -1
doc_cn/index.rst                                                      +1    -1
paddle/gserver/tests/sequenceGen.py                                   +10   -10
paddle/gserver/tests/sequence_nest_rnn.conf                           +2    -3
paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py   +98   -0
paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py        +35   -33
paddle/gserver/tests/test_RecurrentGradientMachine.cpp                +9    -10
python/paddle/utils/image_multiproc.py                                +262  -0
python/paddle/utils/image_util.py                                     +17   -14
doc_cn/algorithm/rnn/glossary_rnn.dot  0 → 100644
digraph G {
    subgraph cluster_timestep0 {
        label = "recurrent timestep i-1"
        bgcolor = lightgray
        node [style=filled, color=white]
        fc0_0 [label="fc 0"]
        fc0_1 [label="fc 1"]
        fc0_2 [label="fc 2"]

        fc0_0 -> fc0_1
        fc0_1 -> fc0_2
    }

    subgraph cluster_timestep1 {
        label = "recurrent timestep i"
        node [style=filled];
        fc1_0 [label="fc 0"]
        fc1_1 [label="fc 1"]
        fc1_2 [label="fc 2"]
        color = blue

        fc1_0 -> fc1_1
        fc1_1 -> fc1_2
    }

    subgraph cluster_timestep2 {
        label = "recurrent timestep i+1"
        bgcolor = lightgray
        node [style=filled, color=white]
        fc2_0 [label="fc 0"]
        fc2_1 [label="fc 1"]
        fc2_2 [label="fc 2"]

        fc2_0 -> fc2_1
        fc2_1 -> fc2_2
    }

    fc0_1 -> fc1_1 [style="dotted" constraint=false]
    fc1_1 -> fc2_1 [style="dotted" constraint=false]
}
\ No newline at end of file
doc_cn/algorithm/rnn/glossary_rnn_with_memory.dot  0 → 100644
digraph G {
    subgraph cluster_timestep0 {
        label = "recurrent timestep i-1"
        bgcolor = lightgray
        node [style=filled, color=white]
        fc0_0 [label="fc 0"]
        fc0_1 [label="fc 1"]
        fc0_2 [label="fc 2"]
        m0 [label="memory"]

        fc0_0 -> fc0_1
        fc0_1 -> fc0_2
        fc0_1 -> m0
        m0 -> fc0_1
    }

    subgraph cluster_timestep1 {
        label = "recurrent timestep i"
        node [style=filled];
        fc1_0 [label="fc 0"]
        fc1_1 [label="fc 1"]
        fc1_2 [label="fc 2"]
        m1 [label="memory"]
        color = blue

        fc1_0 -> fc1_1
        fc1_1 -> fc1_2
        fc1_1 -> m1
        m1 -> fc1_1
    }

    subgraph cluster_timestep2 {
        label = "recurrent timestep i+1"
        bgcolor = lightgray
        node [style=filled, color=white]
        fc2_0 [label="fc 0"]
        fc2_1 [label="fc 1"]
        fc2_2 [label="fc 2"]
        m2 [label="memory"]

        fc2_0 -> fc2_1
        fc2_1 -> fc2_2
        fc2_1 -> m2
        m2 -> fc2_1
    }

    m0 -> m1 [style="dotted" constraint=false]
    m1 -> m2 [style="dotted" constraint=false]
}
\ No newline at end of file
doc_cn/algorithm/rnn/hierarchical-rnn.md  deleted 100644 → 0

This diff is collapsed.
doc_cn/algorithm/rnn/hrnn_demo.rst  0 → 100644

.. _algo_hrnn_demo:

################################
Two-Level RNN Usage Example
################################

TBD
\ No newline at end of file
doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst  0 → 100644

.. _algo_hrnn_rnn_api_compare:

################################################
Single-Level vs. Two-Level RNN API Comparison
################################################

This article uses PaddlePaddle's two-level RNN unit tests as examples: several pairs of models with identical behavior, configured once with a single-level RNN and once with a two-level RNN, illustrate how to use the two-level RNN. All examples here demonstrate only the two-level RNN API and do not solve a real problem with it; for using two-level RNNs on concrete problems, see :ref:`algo_hrnn_demo`. The unit test file used by these examples is `test_RecurrentGradientMachine.cpp <https://github.com/reyoung/Paddle/blob/develop/paddle/gserver/tests/test_RecurrentGradientMachine.cpp>`_.
Example 1: Two-Level RNN Without Memory Between Subsequences
============================================================

The classic two-level RNN case applies a sequence operation to each inner time series separately, with the operations on different inner sequences independent of each other, i.e. no Memory is needed.

In this example, both the single-level and the two-level RNN configurations take each word-segmented sentence and compress it into a vector with an LSTM encoder. The difference is that the two-level RNN uses a two-level sequence model and encodes several sentences, treated as one whole, at the same time. The two are semantically identical. This pair of semantically equivalent configurations is:

* Single-level RNN: `sequence_layer_group.conf <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/gserver/tests/sequence_layer_group.conf>`_
* Two-level RNN: `sequence_nest_layer_group.conf <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/gserver/tests/sequence_nest_layer_group.conf>`_
Reading Two-Level Sequence Data
-------------------------------

First, the raw data used in this example:

- The raw data contains 10 samples. Each sample has two parts: a label (always 2 here) and an already word-segmented sentence. The single-level RNN network uses this data directly.

.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg
   :language: text

- The two-level sequence data contains 4 samples. Samples are separated by blank lines, and the data as a whole is exactly the same as the raw data. For the two-level LSTM, however, the first sample encodes two sentences into two vectors at once. The numbers of sentences processed together by these four samples are :code:`[2, 3, 2, 3]`.

.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest
   :language: text

Second, for these two different input data types, the corresponding DataProviders are compared below (`sequenceGen.py <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/gserver/tests/sequenceGen.py>`_):

.. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py
   :language: python
   :lines: 21-39
   :linenos:
- This is the DataProvider for an ordinary single-level time series:

  * The DataProvider returns two pieces of data, words and label (line 19 of the code above).

    - words is the array of vocabulary indices for each sentence of the raw data. Its type is integer_value_sequence, i.e. an integer array; words is the single-level time series in this data.
    - label is the classification label of each sentence, of type integer_value.

.. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py
   :language: python
   :lines: 42-71
   :linenos:

- This is the DataProvider for the same data as a two-level time series (a minimal sketch contrasting the two input types follows this list):

  - The DataProvider returns two groups of data, sentences and labels, i.e. all the sentences and all the labels within each group of the two-level raw data.

    - sentences is the two-level time series. It contains every sentence of a group, each sentence represented by its array of vocabulary indices, so its type is integer_value_sub_sequence, i.e. a two-level time series.
    - labels are the labels of the sentences within a group, hence a single-level time series.
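To make the two input types concrete, here is a minimal sketch, not the test's actual provider, assuming PaddlePaddle's v1 PyDataProvider2 API and made-up vocabulary (100) and label (3) sizes:

.. code-block:: python

   from paddle.trainer.PyDataProvider2 import *

   @provider(input_types=[integer_value_sequence(100), integer_value(3)])
   def process_single(settings, file_name):
       # one sentence = one single-level sequence of word indices, plus a label
       yield [9, 2, 3, 5, 3], 2

   @provider(input_types=[integer_value_sub_sequence(100),
                          integer_value_sequence(3)])
   def process_nested(settings, file_name):
       # one group = several sentences (a two-level sequence) and their labels
       yield [[9, 2, 3], [5, 3]], [2, 2]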
Model Configuration
-------------------

First, the single-level RNN configuration. Lines 9-15 of the code (highlighted) are where the single-level RNN sequence is used. It relies on an RNN helper function predefined by PaddlePaddle, in which every time step of the RNN passes through an LSTM network.

.. literalinclude:: ../../../paddle/gserver/tests/sequence_layer_group.conf
   :language: python
   :lines: 38-63
   :linenos:
   :emphasize-lines: 9-15
Second, the semantically equivalent two-level RNN configuration:

* Many PaddlePaddle layers do not care whether their input is a time series, e.g. :code:`embedding_layer`. In such layers every operation is applied per time step.
* Lines 7-26 of this configuration (highlighted) first transform the two-level time series into single-level time series, then process each single-level time series.

  * The transformation uses the :code:`recurrent_group` function, to which the input sequences must be passed. Since the desired transformation is two-level time series => single-level time series, the input data must be marked as :code:`SubsequenceInput`.
  * In this example, :code:`recurrent_group` unpacks each group of the raw data, and every resulting sentence passes through an LSTM network. This is equivalent to the single-level RNN configuration.

* As in the single-level configuration, only the final vector produced by the LSTM encoder is needed, so :code:`last_seq` is applied to the result of :code:`recurrent_group`. Unlike the single-level RNN, the last element of every subsequence is taken, hence :code:`agg_level=AggregateLevel.EACH_SEQUENCE`.
* At this point, :code:`lstm_last` yields the same result as :code:`lstm_last` in the single-level RNN configuration. (A minimal sketch of the whole pattern follows the listing below.)

.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_layer_group.conf
   :language: python
   :lines: 38-64
   :linenos:
   :emphasize-lines: 7-26
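A minimal sketch of this unpack-and-encode pattern, with assumed layer names and sizes rather than the test configuration itself:

.. code-block:: python

   from paddle.trainer_config_helpers import *

   emb = embedding_layer(input=data_layer(name="word", size=10000), size=256)

   def inner_step(sent):
       # 'sent' is one subsequence (one sentence); encode it with an LSTM
       return simple_lstm(input=sent, size=128)

   # SubsequenceInput marks the two-level input; each step sees one subsequence.
   encoded = recurrent_group(step=inner_step, input=SubsequenceInput(emb))

   # Take the last LSTM output of every subsequence, not of the whole sequence.
   lstm_last = last_seq(input=encoded, agg_level=AggregateLevel.EACH_SEQUENCE)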
Example 2: Two-Level RNN With Memory Between Subsequences
=========================================================

This example implements two fully equivalent fully-connected RNNs, one as a single-level RNN and one as a two-level RNN.

* For the single-level RNN, the input is a complete time series, e.g. :code:`[4, 5, 2, 0, 9, 8, 1, 4]`.
* For the two-level RNN, the input takes the single-level data and groups parts of it arbitrarily into a two-level time series, e.g. :code:`[[4, 5, 2], [0, 9], [8, 1, 4]]`.

Model Configuration
-------------------

We pick out the parts where the single-level and two-level configurations differ, to analyze why the two are semantically identical.
- Single-level RNN: a very simple recurrent_group. At every time step, the current input y and the previous time step's output rnn_state go through a fully connected layer.

.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn.conf
   :language: python
   :lines: 36-48

- Two-level RNN, whose outer memory is a single element:

  - The inner recurrent_group in inner_step is almost identical to the single-level one, except for boot_layer=outer_mem, which takes the outer outer_mem as the initial state of the inner memory. In the outer outer_step, outer_mem is the last vector of a subsequence; that is, the two-level group as a whole feeds the last vector of the previous subsequence in as the initial memory state of the next subsequence.
  - Seen from the input data, the single-level and two-level sequences contain the same sentences; the two-level version merely adds a subsequence partition. Therefore the two-level configuration must pass the last element of the previous subsequence, as boot_layer, to the memory of the next subsequence, to stay consistent with the single-level configuration, in which "every time step uses the previous time step's output". (A minimal sketch of this boot_layer pattern follows the warning below.)

.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn.conf
   :language: python
   :lines: 39-66

.. warning::

   PaddlePaddle currently only supports the case in which the Memory's time series length is the same at every time step.
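A minimal sketch of the boot_layer pattern, with assumed sizes and layer names; naming a last_seq "outer_rnn_state" so it is read back as the outer memory mirrors the test configuration referenced above:

.. code-block:: python

   from paddle.trainer_config_helpers import *

   hidden_dim = 8
   emb = embedding_layer(input=data_layer(name="word", size=100),
                         size=hidden_dim)

   def outer_step(x):
       # outer memory: the last vector of the previous subsequence
       outer_mem = memory(name="outer_rnn_state", size=hidden_dim)

       def inner_step(y):
           # the inner memory starts from outer_mem instead of a zero vector
           inner_mem = memory(name="inner_rnn_state", size=hidden_dim,
                              boot_layer=outer_mem)
           return fc_layer(input=[y, inner_mem], size=hidden_dim,
                           act=TanhActivation(), bias_attr=True,
                           name="inner_rnn_state")

       inner_rnn_output = recurrent_group(step=inner_step, input=x)
       # this last_seq's name feeds it back as the outer memory above
       last_seq(input=inner_rnn_output, name="outer_rnn_state")
       return inner_rnn_output

   out = recurrent_group(step=outer_step, input=SubsequenceInput(emb))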
Example 3: Two-Level RNN With Unequal-Length Inputs
===================================================

.. role:: red

.. raw:: html

   <style> .red {color:red} </style>

**Unequal-length inputs** means that the several input sequences of a recurrent_group may have subsequences of different lengths at a given time step. When the sequence is output, however, it must be declared consistent with the sequence layout of one particular input. :red:`targetInlink` specifies which input the output's sequence layout agrees with; by default the first input is used.

The configurations of example 3 are the `single-level unequal-length RNN <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf>`_ and the `two-level unequal-length RNN <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf>`_.

Example 3 uses exactly the same data for the single-level and the two-level RNN.

* The single-level RNN data consists of two samples, :code:`[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]` and :code:`[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]`. Each single-level sample carries two feature sequences.
* On top of the single-level data, the two-level RNN data inserts arbitrary breaks; e.g. the sample :code:`[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]` becomes :code:`[[0, 2], [2, 5], [0, 1, 2]], [[1, 5], [4], [2, 3, 6, 1]]`.
* Note that PaddlePaddle currently only supports multi-input two-level RNNs whose inputs have the same number of subsequences. In this example both features have three subsequences each; the subsequences may differ in length, but their number must match.
Model Configuration
-------------------

Like example 2, example 3 uses a single-level and a two-level RNN configuration to implement two fully equivalent fully-connected RNNs.

* Single-level RNN:

.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py
   :language: python
   :lines: 42-59
   :linenos:

* Two-level RNN:

.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py
   :language: python
   :lines: 41-80
   :linenos:

In the code above, the single-level and two-level sequences are used as in example 2, except that two inputs are processed at the same time, and for the two-level sequence the two inputs' subsequence lengths differ. The :code:`targetInlink` parameter sets the output format of the outer :code:`recurrent_group`, so the sequence shape of the outer output agrees with that of :code:`emb2`. (A minimal sketch follows.)
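A minimal sketch of pinning the output layout to the second input via targetInlink, with assumed layer names and sizes:

.. code-block:: python

   from paddle.trainer_config_helpers import *

   emb1 = embedding_layer(input=data_layer(name="word1", size=100), size=16)
   emb2 = embedding_layer(input=data_layer(name="word2", size=100), size=16)

   def outer_step(x1, x2):
       # each step sees one subsequence of emb1 and one of emb2,
       # possibly of different lengths
       last1 = last_seq(input=fc_layer(input=x1, size=16))
       h2 = fc_layer(input=x2, size=16)
       # expand x1's summary so both outputs share x2's sequence layout
       h1 = expand_layer(input=last1, expand_as=h2)
       return [h1, h2]

   rep1, rep2 = recurrent_group(
       name="outer",
       step=outer_step,
       input=[SubsequenceInput(emb1), SubsequenceInput(emb2)],
       targetInlink=emb2)  # the output sequence layout follows emb2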
Example 4: Generation With beam_search
======================================

TBD

Glossary
========

.. _glossary_memory:

Memory
------

Memory is a concept PaddlePaddle uses when implementing RNNs. An RNN, i.e. a recurrent neural network, usually requires some dependency between time steps: the network at the current time step depends on the output of some neuron of the network at the previous time step, as shown below.

.. graphviz:: glossary_rnn.dot

The dotted connections in the figure above are the connections that cross time steps. When implementing RNNs, PaddlePaddle realizes such cross-time-step connections with a special network unit called Memory. A Memory caches the output of a neuron at one time step and feeds it to another neuron at the next time step. An RNN implemented with Memory is shown in the figure below.

.. graphviz:: glossary_rnn_with_memory.dot

With this approach PaddlePaddle can quite simply determine which outputs should cross time steps and which should not. (A minimal usage sketch follows.)
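A minimal sketch of Memory inside a recurrent_group, with assumed sizes; a memory layer reads, at step t, the step t-1 output of the layer with the same name:

.. code-block:: python

   from paddle.trainer_config_helpers import *

   x = data_layer(name="x", size=10)

   def step(y):
       # 'rnn_state' at step t reads the 'rnn_state' output of step t-1
       mem = memory(name="rnn_state", size=10)
       return fc_layer(input=[y, mem], size=10, act=TanhActivation(),
                       bias_attr=True, name="rnn_state")

   out = recurrent_group(step=step, input=x)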
.. _glossary_timestep:

Time Step
---------

See time series.

.. _glossary_sequence:

Time Series
-----------

A time series is a series of feature data whose order is meaningful: an array of features rather than a set of features. Each array element, i.e. each feature item in the series, is one time step. Note that time series and time steps are not truly about "time"; whenever the order within a series of feature data is meaningful, it is valid time series input.

For example, in text classification a sentence is usually treated as a time series: each word in the sentence maps to its position in the vocabulary, and the sentence is represented as the array of those positions, e.g. :code:`[9, 2, 3, 5, 3]`.

For a more detailed and precise definition of time series, see the Wikipedia page `Time series <https://en.wikipedia.org/wiki/Time_series>`_ or the Chinese Wikipedia page `时间序列 <https://zh.wikipedia.org/wiki/%E6%99%82%E9%96%93%E5%BA%8F%E5%88%97>`_.

Also, Paddle often calls a time series a :code:`Sequence`; the two are one concept in Paddle's documentation and API.
.. _glossary_RNN:

RNN
---

In PaddlePaddle's documentation, RNN usually means :code:`Recurrent neural network`. For an introduction see the Wikipedia page `Recurrent neural network <https://en.wikipedia.org/wiki/Recurrent_neural_network>`_ or the `Chinese Wikipedia page <https://zh.wikipedia.org/wiki/%E9%80%92%E5%BD%92%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C>`_.

In PaddlePaddle, RNN usually refers to a network in which, for a time series input, the networks at different time steps are related: for example, one neuron's input is the output of some neuron of the previous time step's network, or, viewed per time step, the network structure contains a directed cycle.

.. _glossary_双层RNN:

Two-Level RNN
-------------

A two-level RNN, as the name suggests, nests one RNN inside another. The input data is a time series as a whole, and each of its inner feature items is itself a time series: a two-dimensional array, or an array of arrays (a tiny illustration follows). A two-level RNN is a network structure that can process such input.

For example, consider classifying a paragraph of text. We view the paragraph as an array of sentences, and each sentence as an array of words; this is two-level RNN input data. Encoding every sentence of the paragraph into a vector with an LSTM, then encoding those sentence vectors into one paragraph vector with another LSTM, and finally classifying that paragraph vector, is one such two-level RNN network structure.
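A tiny, made-up illustration of two-level sequence data, a paragraph as an array of sentences with each sentence an array of word indices:

.. code-block:: python

   paragraph = [
       [9, 2, 3],     # sentence 1
       [5, 3],        # sentence 2
       [7, 1, 4, 6],  # sentence 3
   ]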
doc_cn/algorithm/rnn/simple_full_hierarchical_recurrent.dot  0 → 100644
digraph G {
    rankdir = LR;

    subgraph cluster_t0 {
        a [label="4"]
        b [label="5"]
        c [label="2"]
    }

    subgraph cluster_t1 {
        d [label="0"]
        e [label="9"]
    }

    subgraph cluster_t2 {
        f [label="8"]
        g [label="1"]
        h [label="4"]
    }

    a -> b;
    b -> c;
    c -> d [constraint=false];
    d -> e;
    e -> f [constraint=false];
    f -> g;
    g -> h;
}
\ No newline at end of file
doc_cn/algorithm/rnn/simple_full_recurrent.dot  0 → 100644
digraph G {
    rankdir = LR;
    a [label="4"]
    b [label="5"]
    c [label="2"]
    d [label="0"]
    e [label="9"]
    f [label="8"]
    g [label="1"]
    h [label="4"]

    a -> b;
    b -> c;
    c -> d;
    d -> e;
    e -> f;
    f -> g;
    g -> h;
}
\ No newline at end of file
doc_cn/concepts/use_concepts.rst

@@ -93,7 +93,7 @@ DataProvider is PaddlePaddle's data provider, converting the user's raw data
 - ``outputs``: mark the network's output functions with ``outputs``.

-  During training, the network's output is the neural network's optimization target; during prediction, the network's output can also be marked with ``outputs``.
+  During training, the network's output is the neural network's optimization target; during prediction, the network's output can also be marked with ``outputs``.

 A few more details on ``mixed_layer``: this layer sums several inputs (Projections or Operators); the actual computation is carried out by the inner Projections and Operators, after which bias and activation are applied,

@@ -152,6 +152,4 @@ For multi-machine training, PaddlePaddle uses the classic Parameter Server architecture for the trai
 .. _损失函数层: ../../doc/ui/api/trainer_config_helpers/layers.html#cost-layers
 .. _评估器: ../../doc/ui/api/trainer_config_helpers/evaluators.html
 .. _mixed_layer: ../../doc/ui/api/trainer_config_helpers/layers.html#mixed-layer
-.. _masking-gpu: http://www.acceleware.com/blog/cudavisibledevices-masking-gpus
-.. _集群训练Paddle: ../cluster/index.html
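The ``mixed_layer`` described in the context line above sums several projections before bias and activation. A minimal sketch, assuming v1 helper names and made-up sizes, not the documented configuration itself:

.. code-block:: python

   from paddle.trainer_config_helpers import *

   a = data_layer(name="a", size=16)
   b = data_layer(name="b", size=16)

   # sum two full-matrix projections, then add bias and apply the activation
   out = mixed_layer(
       input=[full_matrix_projection(input=a),
              full_matrix_projection(input=b)],
       size=32,
       bias_attr=True,
       act=ReluActivation())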
doc_cn/conf.py.in

@@ -69,7 +69,7 @@ master_doc = 'index'
 #
 # This is also used if you do content translation via gettext catalogs.
 # Usually you set "language" from the command line for these cases.
-language = None
+language = 'zh_CN'

 # There are two options for replacing |today|: either, you set today to some
 # non-false value, then it is used:
doc_cn/faq/index.rst

@@ -274,7 +274,7 @@ PaddlePaddle uses the name :code:`name` as a parameter's ID; parameters with the same name
 For example, if a machine has 4 GPUs numbered from 0, and GPUs 2 and 3 should be used:

-* Method 1: specify particular GPUs via the ``CUDA_VISIBLE_DEVICES`` environment variable.
+* Method 1: specify particular GPUs via the `CUDA_VISIBLE_DEVICES <http://www.acceleware.com/blog/cudavisibledevices-masking-gpus>`_ environment variable.

 .. code-block:: bash
doc_cn/index.rst

@@ -23,7 +23,7 @@ PaddlePaddle documentation
 * `Recurrent Group tutorial <algorithm/rnn/rnn-tutorial.html>`_
 * `Single-level RNN example <../doc/algorithm/rnn/rnn.html>`_
-* `Two-level RNN example <algorithm/rnn/hierarchical-rnn.html>`_
+* :ref:`algo_hrnn_rnn_api_compare`
 * `Layers that accept two-level sequences as input <algorithm/rnn/hierarchical-layer.html>`_

 FAQ
paddle/gserver/tests/sequenceGen.py

@@ -33,10 +33,10 @@ def process(settings, file_name):
         label, comment = line.strip().split('\t')
         label = int(''.join(label.split()))
         words = comment.split()
-        word_slot = [
-            settings.word_dict[w] for w in words if w in settings.word_dict
-        ]
-        yield word_slot, label
+        words = [
+            settings.word_dict[w] for w in words if w in settings.word_dict
+        ]
+        yield words, label


 ## for hierarchical sequence network

@@ -52,20 +52,20 @@ def hook2(settings, dict_file, **kwargs):
 @provider(init_hook=hook2, should_shuffle=False)
 def process2(settings, file_name):
     with open(file_name) as fdata:
-        label_list = []
-        word_slot_list = []
+        labels = []
+        sentences = []
         for line in fdata:
             if (len(line)) > 1:
                 label, comment = line.strip().split('\t')
                 label = int(''.join(label.split()))
                 words = comment.split()
-                word_slot = [
-                    settings.word_dict[w] for w in words
-                    if w in settings.word_dict
-                ]
-                label_list.append(label)
-                word_slot_list.append(word_slot)
+                words = [
+                    settings.word_dict[w] for w in words
+                    if w in settings.word_dict
+                ]
+                labels.append(label)
+                sentences.append(words)
             else:
-                yield word_slot_list, label_list
-                label_list = []
-                word_slot_list = []
+                yield sentences, labels
+                labels = []
+                sentences = []
paddle/gserver/tests/sequence_nest_rnn.conf

@@ -55,9 +55,8 @@ def outer_step(x):
                                      input=x)
     last = last_seq(input=inner_rnn_output, name="outer_rnn_state")

-    # "return last" should also work. But currently RecurrentGradientMachine
-    # does not handle it, and will report error: In hierachical RNN, all out
-    # links should be from sequences now.
+    # "return last" won't work, because recurrent_group only support the input
+    # sequence type is same as return sequence type.
     return inner_rnn_output

 out = recurrent_group(
paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf → paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py

-#edit-mode: -*- python -*-
+# edit-mode: -*- python -*-
 # Copyright (c) 2016 Baidu, Inc. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");

@@ -16,11 +16,11 @@
 from paddle.trainer_config_helpers import *

 ######################## data source ################################
-define_py_data_sources2(train_list='gserver/tests/Sequence/dummy.list',
-                        test_list=None,
-                        module='rnn_data_provider',
-                        obj='process_unequalength_subseq')
+define_py_data_sources2(
+    train_list='gserver/tests/Sequence/dummy.list',
+    test_list=None,
+    module='rnn_data_provider',
+    obj='process_unequalength_subseq')

 settings(batch_size=2, learning_rate=0.01)
 ######################## network configure ################################

@@ -35,49 +35,40 @@ speaker2 = data_layer(name="word2", size=dict_dim)
 emb1 = embedding_layer(input=speaker1, size=word_dim)
 emb2 = embedding_layer(input=speaker2, size=word_dim)

-# This hierachical RNN is designed to be equivalent to the simple RNN in
-# sequence_rnn_multi_unequalength_inputs.conf
+# This hierarchical RNN is designed to be equivalent to the simple RNN in
+# sequence_rnn_multi_unequalength_inputs.conf


 def outer_step(x1, x2):
-    outer_mem1 = memory(name="outer_rnn_state1", size=hidden_dim)
-    outer_mem2 = memory(name="outer_rnn_state2", size=hidden_dim)
-
-    def inner_step1(y):
-        inner_mem = memory(name='inner_rnn_state_' + y.name,
-                           size=hidden_dim,
-                           boot_layer=outer_mem1)
-        out = fc_layer(input=[y, inner_mem],
-                       size=hidden_dim,
-                       act=TanhActivation(),
-                       bias_attr=True,
-                       name='inner_rnn_state_' + y.name)
-        return out
-
-    def inner_step2(y):
-        inner_mem = memory(name='inner_rnn_state_' + y.name,
-                           size=hidden_dim,
-                           boot_layer=outer_mem2)
-        out = fc_layer(input=[y, inner_mem],
-                       size=hidden_dim,
-                       act=TanhActivation(),
-                       bias_attr=True,
-                       name='inner_rnn_state_' + y.name)
-        return out
-
-    encoder1 = recurrent_group(step=inner_step1, name='inner1', input=x1)
-    encoder2 = recurrent_group(step=inner_step2, name='inner2', input=x2)
-
-    sentence_last_state1 = last_seq(input=encoder1, name='outer_rnn_state1')
-    sentence_last_state2_ = last_seq(input=encoder2, name='outer_rnn_state2')
-
-    encoder1_expand = expand_layer(input=sentence_last_state1,
-                                   expand_as=encoder2)
+    index = [0]
+
+    def inner_step(ipt):
+        index[0] += 1
+        i = index[0]
+        outer_mem = memory(name="outer_rnn_state_%d" % i, size=hidden_dim)
+
+        def inner_step_impl(y):
+            inner_mem = memory(
+                name="inner_rnn_state_" + y.name,
+                size=hidden_dim,
+                boot_layer=outer_mem)
+            out = fc_layer(
+                input=[y, inner_mem],
+                size=hidden_dim,
+                act=TanhActivation(),
+                bias_attr=True,
+                name='inner_rnn_state_' + y.name)
+            return out
+
+        encoder = recurrent_group(
+            step=inner_step_impl, name='inner_%d' % i, input=ipt)
+        last = last_seq(name="outer_rnn_state_%d" % i, input=encoder)
+        return encoder, last
+
+    encoder1, sentence_last_state1 = inner_step(ipt=x1)
+    encoder2, sentence_last_state2 = inner_step(ipt=x2)
+
+    encoder1_expand = expand_layer(
+        input=sentence_last_state1, expand_as=encoder2)

     return [encoder1_expand, encoder2]

@@ -88,19 +79,20 @@ encoder1_rep, encoder2_rep = recurrent_group(
     input=[SubsequenceInput(emb1), SubsequenceInput(emb2)],
     targetInlink=emb2)

 encoder1_last = last_seq(input=encoder1_rep)
-encoder1_expandlast = expand_layer(input=encoder1_last,
-                                   expand_as=encoder2_rep)
-context = mixed_layer(input=[identity_projection(encoder1_expandlast),
-                             identity_projection(encoder2_rep)],
-                      size=hidden_dim)
+encoder1_expandlast = expand_layer(
+    input=encoder1_last, expand_as=encoder2_rep)
+context = mixed_layer(
+    input=[
+        identity_projection(encoder1_expandlast),
+        identity_projection(encoder2_rep)
+    ],
+    size=hidden_dim)

 rep = last_seq(input=context)
-prob = fc_layer(size=label_dim,
-                input=rep,
-                act=SoftmaxActivation(),
-                bias_attr=True)
-
-outputs(classification_cost(input=prob,
-                            label=data_layer(name="label", size=label_dim)))
+prob = fc_layer(
+    size=label_dim, input=rep, act=SoftmaxActivation(), bias_attr=True)
+
+outputs(
+    classification_cost(
+        input=prob, label=data_layer(
+            name="label", size=label_dim)))
paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf → paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py

@@ -16,11 +16,11 @@
 from paddle.trainer_config_helpers import *

 ######################## data source ################################
-define_py_data_sources2(train_list='gserver/tests/Sequence/dummy.list',
-                        test_list=None,
-                        module='rnn_data_provider',
-                        obj='process_unequalength_seq')
+define_py_data_sources2(
+    train_list='gserver/tests/Sequence/dummy.list',
+    test_list=None,
+    module='rnn_data_provider',
+    obj='process_unequalength_seq')

 settings(batch_size=2, learning_rate=0.01)
 ######################## network configure ################################

@@ -38,38 +38,40 @@ emb2 = embedding_layer(input=speaker2, size=word_dim)
 # This hierachical RNN is designed to be equivalent to the RNN in
 # sequence_nest_rnn_multi_unequalength_inputs.conf


 def step(x1, x2):
-    def calrnn(y):
-        mem = memory(name='rnn_state_' + y.name, size=hidden_dim)
-        out = fc_layer(input=[y, mem],
-                       size=hidden_dim,
-                       act=TanhActivation(),
-                       bias_attr=True,
-                       name='rnn_state_' + y.name)
-        return out
-
-    encoder1 = calrnn(x1)
-    encoder2 = calrnn(x2)
-    return [encoder1, encoder2]
+    def calrnn(y):
+        mem = memory(name='rnn_state_' + y.name, size=hidden_dim)
+        out = fc_layer(
+            input=[y, mem],
+            size=hidden_dim,
+            act=TanhActivation(),
+            bias_attr=True,
+            name='rnn_state_' + y.name)
+        return out
+
+    encoder1 = calrnn(x1)
+    encoder2 = calrnn(x2)
+    return [encoder1, encoder2]


 encoder1_rep, encoder2_rep = recurrent_group(
-    name="stepout",
-    step=step,
-    input=[emb1, emb2])
+    name="stepout", step=step, input=[emb1, emb2])

 encoder1_last = last_seq(input=encoder1_rep)
-encoder1_expandlast = expand_layer(input=encoder1_last,
-                                   expand_as=encoder2_rep)
-context = mixed_layer(input=[identity_projection(encoder1_expandlast),
-                             identity_projection(encoder2_rep)],
-                      size=hidden_dim)
+encoder1_expandlast = expand_layer(
+    input=encoder1_last, expand_as=encoder2_rep)
+context = mixed_layer(
+    input=[
+        identity_projection(encoder1_expandlast),
+        identity_projection(encoder2_rep)
+    ],
+    size=hidden_dim)

 rep = last_seq(input=context)
-prob = fc_layer(size=label_dim,
-                input=rep,
-                act=SoftmaxActivation(),
-                bias_attr=True)
-
-outputs(classification_cost(input=prob,
-                            label=data_layer(name="label", size=label_dim)))
+prob = fc_layer(
+    size=label_dim, input=rep, act=SoftmaxActivation(), bias_attr=True)
+
+outputs(
+    classification_cost(
+        input=prob, label=data_layer(
+            name="label", size=label_dim)))
paddle/gserver/tests/test_RecurrentGradientMachine.cpp

@@ -13,12 +13,12 @@ See the License for the specific language governing permissions and
 limitations under the License. */

 #include <gtest/gtest.h>
-#include <paddle/utils/Util.h>
-#include <paddle/utils/Version.h>
-#include <paddle/utils/PythonUtil.h>
-#include <paddle/gserver/gradientmachines/GradientMachine.h>
 #include <paddle/trainer/Trainer.h>
 #include <paddle/trainer/TrainerInternal.h>
+#include <paddle/gserver/gradientmachines/GradientMachine.h>
+#include <paddle/utils/PythonUtil.h>
+#include <paddle/utils/Util.h>
+#include <paddle/utils/Version.h>

 P_DECLARE_int32(seed);

@@ -45,10 +45,9 @@ public:
     auto p = const_cast<TrainerForTest*>(this);
     auto& params = p->getGradientMachine()->getParameters();
-    return std::accumulate(params.begin(), params.end(), 0UL,
-                           [](size_t a, const ParameterPtr& p) {
-                             return a + p->getSize();
-                           });
+    return std::accumulate(
+        params.begin(), params.end(), 0UL,
+        [](size_t a, const ParameterPtr& p) { return a + p->getSize(); });
   }
 };

@@ -148,8 +147,8 @@ TEST(RecurrentGradientMachine, rnn_multi_input) {
 TEST(RecurrentGradientMachine, rnn_multi_unequalength_input) {
   for (bool useGpu : {false, true}) {
-    test("gserver/tests/sequence_rnn_multi_unequalength_inputs.conf",
-         "gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf",
+    test("gserver/tests/sequence_rnn_multi_unequalength_inputs.py",
+         "gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py",
          1e-6,
          useGpu);
   }
python/paddle/utils/image_multiproc.py  0 → 100644

import os, sys
import numpy as np
from PIL import Image
from cStringIO import StringIO
import multiprocessing
import functools
import itertools

from paddle.utils.image_util import *
from paddle.trainer.config_parser import logger

try:
    import cv2
except ImportError:
    logger.warning("OpenCV2 is not installed, using PIL to prcoess")
    cv2 = None

__all__ = ["CvTransformer", "PILTransformer", "MultiProcessImageTransformer"]


class CvTransformer(ImageTransformer):
    """
    CvTransformer used python-opencv to process image.
    """

    def __init__(
            self,
            min_size=None,
            crop_size=None,
            transpose=(2, 0, 1),  # transpose to C * H * W
            channel_swap=None,
            mean=None,
            is_train=True,
            is_color=True):
        ImageTransformer.__init__(self, transpose, channel_swap, mean, is_color)
        self.min_size = min_size
        self.crop_size = crop_size
        self.is_train = is_train

    def resize(self, im, min_size):
        row, col = im.shape[:2]
        new_row, new_col = min_size, min_size
        if row > col:
            new_row = min_size * row / col
        else:
            new_col = min_size * col / row
        im = cv2.resize(im, (new_row, new_col), interpolation=cv2.INTER_CUBIC)
        return im

    def crop_and_flip(self, im):
        """
        Return cropped image.
        The size of the cropped image is inner_size * inner_size.
        im: (H x W x K) ndarrays
        """
        row, col = im.shape[:2]
        start_h, start_w = 0, 0
        if self.is_train:
            start_h = np.random.randint(0, row - self.crop_size + 1)
            start_w = np.random.randint(0, col - self.crop_size + 1)
        else:
            start_h = (row - self.crop_size) / 2
            start_w = (col - self.crop_size) / 2
        end_h, end_w = start_h + self.crop_size, start_w + self.crop_size
        if self.is_color:
            im = im[start_h:end_h, start_w:end_w, :]
        else:
            im = im[start_h:end_h, start_w:end_w]
        if (self.is_train) and (np.random.randint(2) == 0):
            if self.is_color:
                im = im[:, ::-1, :]
            else:
                im = im[:, ::-1]
        return im

    def transform(self, im):
        im = self.resize(im, self.min_size)
        im = self.crop_and_flip(im)
        # transpose, swap channel, sub mean
        im = im.astype('float32')
        ImageTransformer.transformer(self, im)
        return im

    def load_image_from_string(self, data):
        flag = cv2.CV_LOAD_IMAGE_COLOR if self.is_color else cv2.CV_LOAD_IMAGE_GRAYSCALE
        im = cv2.imdecode(np.fromstring(data, np.uint8), flag)
        return im

    def transform_from_string(self, data):
        im = self.load_image_from_string(data)
        return self.transform(im)

    def load_image_from_file(self, file):
        flag = cv2.CV_LOAD_IMAGE_COLOR if self.is_color else cv2.CV_LOAD_IMAGE_GRAYSCALE
        im = cv2.imread(file, flag)
        return im

    def transform_from_file(self, file):
        im = self.load_image_from_file(file)
        return self.transform(im)


class PILTransformer(ImageTransformer):
    """
    PILTransformer used PIL to process image.
    """

    def __init__(
            self,
            min_size=None,
            crop_size=None,
            transpose=(2, 0, 1),  # transpose to C * H * W
            channel_swap=None,
            mean=None,
            is_train=True,
            is_color=True):
        ImageTransformer.__init__(self, transpose, channel_swap, mean, is_color)
        self.min_size = min_size
        self.crop_size = crop_size
        self.is_train = is_train

    def resize(self, im, min_size):
        row, col = im.size[:2]
        new_row, new_col = min_size, min_size
        if row > col:
            new_row = min_size * row / col
        else:
            new_col = min_size * col / row
        im = im.resize((new_row, new_col), Image.ANTIALIAS)
        return im

    def crop_and_flip(self, im):
        """
        Return cropped image.
        The size of the cropped image is inner_size * inner_size.
        """
        row, col = im.size[:2]
        start_h, start_w = 0, 0
        if self.is_train:
            start_h = np.random.randint(0, row - self.crop_size + 1)
            start_w = np.random.randint(0, col - self.crop_size + 1)
        else:
            start_h = (row - self.crop_size) / 2
            start_w = (col - self.crop_size) / 2
        end_h, end_w = start_h + self.crop_size, start_w + self.crop_size
        im = im.crop((start_h, start_w, end_h, end_w))
        if (self.is_train) and (np.random.randint(2) == 0):
            im = im.transpose(Image.FLIP_LEFT_RIGHT)
        return im

    def transform(self, im):
        im = self.resize(im, self.min_size)
        im = self.crop_and_flip(im)
        im = np.array(im, dtype=np.float32)  # convert to numpy.array
        # transpose, swap channel, sub mean
        ImageTransformer.transformer(self, im)
        return im

    def load_image_from_string(self, data):
        im = Image.open(StringIO(data))
        return im

    def transform_from_string(self, data):
        im = self.load_image_from_string(data)
        return self.transform(im)

    def load_image_from_file(self, file):
        im = Image.open(file)
        return im

    def transform_from_file(self, file):
        im = self.load_image_from_file(file)
        return self.transform(im)


def job(is_img_string, transformer, (data, label)):
    if is_img_string:
        return transformer.transform_from_string(data), label
    else:
        return transformer.transform_from_file(data), label


class MultiProcessImageTransformer(object):
    def __init__(self,
                 procnum=10,
                 resize_size=None,
                 crop_size=None,
                 transpose=(2, 0, 1),
                 channel_swap=None,
                 mean=None,
                 is_train=True,
                 is_color=True,
                 is_img_string=True):
        """
        Processing image with multi-process. If it is used in PyDataProvider,
        the simple usage for CNN is as follows:

        .. code-block:: python

            def hool(settings, is_train, **kwargs):
                settings.is_train = is_train
                settings.mean_value = np.array([103.939, 116.779, 123.68], dtype=np.float32)
                settings.input_types = [
                    dense_vector(3 * 224 * 224),
                    integer_value(1)]
                settings.transformer = MultiProcessImageTransformer(
                    procnum=10,
                    resize_size=256,
                    crop_size=224,
                    transpose=(2, 0, 1),
                    mean=settings.mean_values,
                    is_train=settings.is_train)

            @provider(init_hook=hook, pool_size=20480)
            def process(settings, file_list):
                with open(file_list, 'r') as fdata:
                    for line in fdata:
                        data_dic = np.load(line.strip())  # load the data batch pickled by Pickle.
                        data = data_dic['data']
                        labels = data_dic['label']
                        labels = np.array(labels, dtype=np.float32)
                        for im, lab in settings.dp.run(data, labels):
                            yield [im.astype('float32'), int(lab)]

        :param procnum: processor number.
        :type procnum: int
        :param resize_size: the shorter edge size of image after resizing.
        :type resize_size: int
        :param crop_size: the croping size.
        :type crop_size: int
        :param transpose: the transpose order, Paddle only allow C * H * W order.
        :type transpose: tuple or list
        :param channel_swap: the channel swap order, RGB or BRG.
        :type channel_swap: tuple or list
        :param mean: the mean values of image, per-channel mean or element-wise mean.
        :type mean: array, The dimension is 1 for per-channel mean.
                    The dimension is 3 for element-wise mean.
        :param is_train: training peroid or testing peroid.
        :type is_train: bool.
        :param is_color: the image is color or gray.
        :type is_color: bool.
        :param is_img_string: The input can be the file name of image or image string.
        :type is_img_string: bool.
        """
        self.procnum = procnum
        self.pool = multiprocessing.Pool(procnum)
        self.is_img_string = is_img_string
        if cv2 is not None:
            self.transformer = CvTransformer(resize_size, crop_size, transpose,
                                             channel_swap, mean, is_train,
                                             is_color)
        else:
            self.transformer = PILTransformer(resize_size, crop_size, transpose,
                                              channel_swap, mean, is_train,
                                              is_color)

    def run(self, data, label):
        fun = functools.partial(job, self.is_img_string, self.transformer)
        return self.pool.imap_unordered(
            fun, itertools.izip(data, label), chunksize=100 * self.procnum)
python/paddle/utils/image_util.py

@@ -186,29 +186,32 @@ class ImageTransformer:
                  channel_swap=None,
                  mean=None,
                  is_color=True):
         self.transpose = transpose
         self.channel_swap = None
         self.mean = None
         self.is_color = is_color
         self.set_transpose(transpose)
         self.set_channel_swap(channel_swap)
         self.set_mean(mean)

     def set_transpose(self, order):
-        if self.is_color:
-            assert 3 == len(order)
+        if order is not None:
+            if self.is_color:
+                assert 3 == len(order)
         self.transpose = order

     def set_channel_swap(self, order):
-        if self.is_color:
-            assert 3 == len(order)
+        if order is not None:
+            if self.is_color:
+                assert 3 == len(order)
         self.channel_swap = order

     def set_mean(self, mean):
-        # mean value, may be one value per channel
-        if mean.ndim == 1:
-            mean = mean[:, np.newaxis, np.newaxis]
-        else:
-            # elementwise mean
-            if self.is_color:
-                assert len(mean.shape) == 3
+        if mean is not None:
+            # mean value, may be one value per channel
+            if mean.ndim == 1:
+                mean = mean[:, np.newaxis, np.newaxis]
+            else:
+                # elementwise mean
+                if self.is_color:
+                    assert len(mean.shape) == 3
         self.mean = mean

     def transformer(self, data):
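A tiny illustration (made-up values) of the two mean forms that the guarded set_mean above accepts: per-channel (ndim == 1, reshaped to C x 1 x 1) versus element-wise (one value per pixel and channel):

.. code-block:: python

   import numpy as np

   per_channel = np.array([103.939, 116.779, 123.68], dtype=np.float32)
   elementwise = np.zeros((3, 224, 224), dtype=np.float32)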