# Design Doc: LoD (Level-of-Detail) Tensor

Like other deep learning systems, PaddlePaddle supports training models from sequence data.  Also, like other systems, PaddlePaddle represents a mini-batch of sequences as a Tensor.  What is different is that PaddlePaddle doesn't require all sequences in a mini-batch to be of the same length, so there is no need to pad them with zeros.

<table>
<thead>
<tr>
<th></th>
<th>TensorFlow</th>
<th>PaddlePaddle</th>
</tr>
</thead>
<tbody>
<tr>
<td>RNN </td>
<td>Support </td>
<td>Support </td>
</tr>
<tr>
<td>recursive RNN </td>
<td>Support </td>
<td>Support </td>
</tr>
<tr>
<td>padding zeros </td>
<td> Must </td>
<td>No need </td>
</tr>
<tr>
<td> blob data type </td>
<td> Tensor</td>
<td> LoDTensor </td>
</tr>
</tbody>
</table>

PaddlePaddle achieves this flexibility by passing a new data type, *LoD Tensor*, between operators.  A LoD Tensor is a Tensor attached with a segmentation index known as *LoD*.  The LoD index not only segments a tensor into sequences, but also recursively segments those sequences into sub-sequences.  This document presents the design of LoD and LoDTensor.


## The Challenge: Variable-length Sequences

Most deep learning systems represent a mini-batch as a Tensor.  For example, a mini-batch of 10 images, each of size 32x32, is a 10x32x32 Tensor.  Another example is a mini-batch of N sentences, where each word is a D-dimensional one-hot vector.  Suppose that all sentences have the same length L; then we can represent this mini-batch by an NxLxD tensor.

Both examples show that the elements of sequences are usually of the same size.  In the first example, all images are 32x32, and in the second one, all words are D-dimensional vectors.  It doesn't make sense to allow variable-sized images, as that would require transformations like convolution to handle variable-sized Tensors.

The real challenge is that in most cases, sentences have variable lengths, and we will need an index data structure to segment the tensor into sequences.  Also, sequences might consist of sub-sequences.

## A Solution: The LoD Index

To understand our solution, it is best to look at some examples.

### A Mini-Batch of Sentences

Let's imagine a mini-batch of 3 variable-length sentences, composed of 3, 1, and 2 words, respectively.  We can represent the mini-batch by a (3+1+2)xD tensor plus some index information:

```
3   1 2
||| | ||
```

where each `|` represents a D-dimensional word vector.  The numbers, 3, 1, and 2, form a 1-level LoD.

### Recursive Sequences

Let's check another example, of a 2-level LoD Tensor.  Consider a mini-batch of three articles with 3, 1, and 2 sentences, respectively, where each sentence consists of a variable number of words:

```
3           1  2
3   2  4    1  2  3
||| || |||| |  || |||
```

### A Mini-Batch of Videos

LoD tensors generalize to the case where elements are higher-dimensional objects, like images.  Suppose that a mini-batch contains videos of the same frame size, 640x480.  Here is a mini-batch of 3 videos with 3, 1, and 2 frames, respectively.

```
3     1  2
口口口 口 口口
```

The underlying tensor is of size (3+1+2)x640x480, and each `口` represents a 640x480 image.

### A Mini-Batch of Images

In traditional cases like a mini-batch with N fixed-sized images, the LoD Tensor representation is as follows:

```
1 1 1 1     1
口口口口 ... 口
```

In this case, we don't lose any information by ignoring the many 1's in the index and simply considering this LoD Tensor as a usual Tensor:

```
口口口口 ... 口
```

### Model Parameters

A model parameter is just a usual Tensor, which, just like the above example, is a **0-level LoD Tensor**.

## The LoD Tensor

Let us revisit the above example of the 2-level LoD Tensor:

```
3           1  2
3   2  4    1  2  3
||| || |||| |  || |||
```

It is indeed a tree, where leaves are elementary sequences identified by **branches**.

For example, the third sentence in the above example is identified by branch <0,2>, where 0 indicates the first article, which has 3 sentences, and 2 indicates the third sentence in this article, which has 4 words.

### The LoD Index

We can save the LoD index in the above example

```
3           1  2
3   2  4    1  2  3
```

in a not-full (jagged) 2D matrix:

```c++
typedef std::vector<std::vector<int> > LoD;
```

where

- `LoD.size()` is the number of levels, or the maximum length of branches,
- `LoD[i][j]` is the length of the j-th segment at the i-th level.
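
For instance, the 2-level LoD of the article example above fits into this structure as follows.  This is only a minimal sketch to make the data structure concrete; the variable names are illustrative, not part of PaddlePaddle's API:

```c++
#include <iostream>
#include <vector>

typedef std::vector<std::vector<int> > LoD;

int main() {
  // 3 articles with 3, 1, and 2 sentences; 6 sentences with
  // 3, 2, 4, 1, 2, and 3 words, respectively.
  LoD lod = {{3, 1, 2}, {3, 2, 4, 1, 2, 3}};

  std::cout << lod.size() << "\n";  // 2 levels: articles, then sentences
  std::cout << lod[1][2] << "\n";   // 4: the 3rd sentence has 4 words
  return 0;
}
```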

## The Offset Representation

To quickly access elementary sequences, we adopt an offset representation -- instead of saving the lengths, we save the beginning and ending elements of sequences.

In the above example, we accumulate the lengths of the elementary sequences:

```
3 2 4 1 2 3
```

into offsets

```
0  3  5   9   10  12   15
   =  =   =   =   =    =
   3  2+3 4+5 1+9 2+10 3+12
```

so we know that the first sentence is from word 0 to word 3, and the second sentence from word 3 to word 5.

Similarly, the lengths in the top-level LoD

```
3 1 2
```

are transformed into offsets in the same way.  Note that these top-level offsets index the sentences at the next lower level, not words:

```
0 3 4   6
  = =   =
  3 3+1 4+2
```
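Converting lengths to offsets is just a per-level prefix sum.  The sketch below reproduces both offset vectors; the helper name `LengthsToOffsets` is ours for illustration, not PaddlePaddle's actual implementation:

```c++
#include <cassert>
#include <vector>

typedef std::vector<std::vector<int> > LoD;

// Convert a length-based LoD to the offset representation: each level's
// offsets are the prefix sums of that level's segment lengths.
LoD LengthsToOffsets(const LoD& lengths) {
  LoD offsets;
  for (const std::vector<int>& level : lengths) {
    std::vector<int> row(1, 0);  // every level's offsets start at 0
    for (int len : level) row.push_back(row.back() + len);
    offsets.push_back(row);
  }
  return offsets;
}

int main() {
  LoD offsets = LengthsToOffsets({{3, 1, 2}, {3, 2, 4, 1, 2, 3}});
  assert(offsets[0] == (std::vector<int>{0, 3, 4, 6}));
  assert(offsets[1] == (std::vector<int>{0, 3, 5, 9, 10, 12, 15}));
  return 0;
}
```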

## Slicing of LoD Tensors


When we use the above 2-level LoD Tensor as the input to a nested RNN, we need to retrieve certain sequences.  Here we define the sequence identified by branch <i,j,...> as the **<i,j,...>-slice**.

For example, the <2>-slice of the above example is

```
10      15
10  12  15
  || |||
```

and the <2,0>-slice of the above slice is

```
10  12
  ||
```
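The offset representation is what makes such slicing cheap: locating a slice takes a constant number of index lookups instead of a scan over lengths.  Here is a minimal sketch for the top-level case, assuming the offset-based LoD computed earlier; the helper is hypothetical, not PaddlePaddle's actual API:

```c++
#include <cassert>
#include <utility>
#include <vector>

typedef std::vector<std::vector<int> > LoD;

// Return the [begin, end) word range covered by the <i>-slice of a
// 2-level offset-based LoD.
std::pair<int, int> TopLevelSliceRange(const LoD& offsets, int i) {
  int sentence_begin = offsets[0][i];  // level-0 offsets index sentences
  int sentence_end = offsets[0][i + 1];
  // Level-1 offsets map sentence boundaries to word positions.
  return std::make_pair(offsets[1][sentence_begin], offsets[1][sentence_end]);
}

int main() {
  LoD offsets = {{0, 3, 4, 6}, {0, 3, 5, 9, 10, 12, 15}};
  // The <2>-slice covers sentences [4, 6), i.e. words [10, 15),
  // matching the diagram above.
  assert(TopLevelSliceRange(offsets, 2) == std::make_pair(10, 15));
  return 0;
}
```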

## Length Representation vs Offset Representation

The offset representation is an implementation-oriented decision, and it makes the idea behind LoDTensor harder to understand.
Hence, we encapsulate this implementation detail in C++ and expose the original length representation in our Python API.
Specifically, we call this length representation `recursive_sequence_lengths`, and users can use the following code to set or get the `recursive_sequence_lengths` of a LoDTensor in Python:
```python
import paddle.fluid as fluid

# Length representation of the LoD, called recursive_sequence_lengths.
recursive_seq_lens = [[3, 1, 2], [2, 2, 1, 3, 1, 2]]
# Create a LoDTensor that has the above recursive_sequence_lengths info.
# The recursive_sequence_lengths will be converted to an offset representation
# of LoD in the C++ implementation under the hood.
tensor = fluid.LoDTensor(recursive_seq_lens)

# Set/change the recursive_sequence_lengths info of the LoDTensor.
tensor.set_recursive_sequence_lengths([[3, 1, 2]])
# Get the recursive_sequence_lengths info of a LoDTensor (the offset-based
# LoD representation stored in C++ will be converted back to length-based
# recursive_sequence_lengths); new_recursive_seq_lens == [[3, 1, 2]]
new_recursive_seq_lens = tensor.recursive_sequence_lengths()
```