# Design Doc: LoD (Level-of-Detail) Tensor

Like other deep learning systems, PaddlePaddle supports training models from sequence data.  Also, like other systems, PaddlePaddle represents a mini-batch of sequences as a Tensor.  What is different is that PaddlePaddle doesn't require all sequences in a mini-batch to be of the same length, so there is no need to pad with zeros.

|                       | TensorFlow | PaddlePaddle |
|-----------------------|------------|--------------|
| RNN                   | Support    | Support      |
| recursive RNN         | Support    | Support      |
| padding zeros         | Must       | No need      |
| blob data type        | Tensor     | LoDTensor    |

PaddlePaddle achieves this flexibility by passing a new data type, *LoD Tensor*, which is a Tensor attached with a segmentation index known as *LoD*, between operators.  The LoD index not only segments a tensor into sequences, but also recursively segments sub-sequences.  This document presents the design of LoD and LoDTensor.


## The Challenge: Variable-length Sequences

Most deep learning systems represent a mini-batch as a Tensor.  For example, a mini-batch of 10 images, each of size 32x32, is a 10x32x32 Tensor.  Another example is a mini-batch of N sentences, where each word is a D-dimensional one-hot vector.  If all sentences have the same length L, we can represent this mini-batch by an NxLxD tensor.

Both examples show that the elements of sequences are usually of the same size.  In the first example, all images are 32x32, and in the second one, all words are D-dimensional vectors.  It doesn't make sense to allow variable-sized images, as that would require transformations like convolution to operate on variable-sized Tensors.

The real challenge is that in most cases, sentences have variable lengths, and we will need an index data structure to segment the tensor into sequences.  Also, sequences might consist of sub-sequences.

## A Solution: The LoD Index

Let us visit this challenge through examples.

### A Mini-Batch of Sentences

Let's imagine a mini-batch of 3 variable-length sentences composed of 3, 1, and 2 words respectively.  We can represent it by a (3+1+2)xD tensor plus some index information:

```
3   1 2
||| | ||
```

where each `|` represents a D-dimensional word vector.  The numbers, 3, 1, and 2, form a 1-level LoD.

### Recursive Sequences

Let's check another example, a 2-level LoD Tensor.  Consider a mini-batch of three articles with 3, 1, and 2 sentences respectively, where each sentence consists of a variable number of words:

```
3           1  2
3   2  4    1  2  3
||| || |||| |  || |||
```

### A Mini-Batch of Videos

LoD Tensor generalizes to the case where elements are higher-dimensional objects, like images.  Suppose that a mini-batch contains videos of the same frame size, 640x480.  Here is a mini-batch of 3 videos with 3, 1, and 2 frames respectively:

```
3     1  2
口口口 口 口口
```

The underlying tensor is of size (3+1+2)x640x480, and each `口` represents a 640x480 image.

### A Mini-Batch of Images

In traditional cases like a mini-batch of N fixed-sized images, the LoD Tensor representation is as follows:

```
1 1 1 1     1
口口口口 ... 口
```

It doesn't lose anything to ignore the many 1's in the index and to consider this LoD Tensor as a usual Tensor:

```
口口口口 ... 口
```

### Model Parameters

A model parameter is just a usual Tensor, which, just like the above example, is a **0-level LoD Tensor**.

## The LoD Tensor

Let us revisit the above example of the 2-level LoD Tensor:

```
3           1  2
3   2  4    1  2  3
||| || |||| |  || |||
```

It is indeed a tree, where leaves are elementary sequences identified by **branches**.

For example, the third sentence in the above example is identified by branch <0,2>, where 0 indicates the first article, which has length 3, and 2 indicates the third sentence in this article, which has length 4.

### The LoD Index

We can save the LoD index of the above example

```
3           1  2
3   2  4    1  2  3
```

in a 2D matrix whose rows are not necessarily of the same length:

```c++
typedef std::vector<std::vector<int> > LoD;
```

where

- `LoD.size()` is the number of levels, or the maximum length of branches;
- `LoD[i][j]` is the length of the j-th segment at the i-th level (see the sketch below).

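For instance, the length-based LoD of the 2-level example above could be built as follows.  This is a minimal sketch assuming C++11; the variable name `lod` is illustrative and not part of PaddlePaddle's API:

```c++
#include <cassert>
#include <vector>

typedef std::vector<std::vector<int> > LoD;

int main() {
  // Length-based LoD of the article example: 3 articles with 3, 1, and 2
  // sentences; the sentences have 3, 2, 4, 1, 2, and 3 words respectively.
  LoD lod = {{3, 1, 2}, {3, 2, 4, 1, 2, 3}};

  assert(lod.size() == 2);  // two levels
  assert(lod[1][2] == 4);   // the 3rd segment at level 1 (3rd sentence) has 4 words
  return 0;
}
```
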
## The Offset Representation

To quickly access elementary sequences, we adopt an offset representation -- instead of saving the lengths, we save the beginning and ending elements of sequences.

In the above example, we accumulate the lengths of the elementary sequences:

```
3 2 4 1 2 3
```

into offsets

```
0  3  5   9   10  12   15
   =  =   =   =   =    =
   3  2+3 4+5 1+9 2+10 3+12
```

so we know that the first sentence is from word 0 to word 3, and the second sentence from word 3 to word 5.
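
The accumulation itself is a simple prefix sum.  Here is a minimal sketch of it; the helper name `ToOffsets` is illustrative and not part of PaddlePaddle's API:

```c++
#include <cassert>
#include <vector>

// Turn a level of segment lengths into offsets by prefix summation, so that
// segment j spans the half-open range [offsets[j], offsets[j+1]).
std::vector<int> ToOffsets(const std::vector<int>& lengths) {
  std::vector<int> offsets(1, 0);
  for (size_t i = 0; i < lengths.size(); ++i) {
    offsets.push_back(offsets.back() + lengths[i]);
  }
  return offsets;
}

int main() {
  std::vector<int> word_offsets = ToOffsets({3, 2, 4, 1, 2, 3});
  // word_offsets == {0, 3, 5, 9, 10, 12, 15}
  assert(word_offsets[1] == 3 && word_offsets.back() == 15);
  return 0;
}
```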

Similarly, the lengths in the top-level LoD

```
3 1 2
```

are transformed into offsets of elements/words:

```
0 9     10  15
  =     =   =
  3+2+4 1+9 2+3+10
```

so we can tell that the first article is from word 0 to word 9, and the second article is from word 9 to word 10.

The complete offset representation is as follows:

```
0          9 10      15
0  3  5    9 10  12  15
||| || |||| |  || |||
```
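
In terms of the `LoD` type introduced earlier, this offset-based index could be written as the sketch below (illustrative names only).  Note that offsets at both levels index words directly, so the last offset of every level equals the number of rows of the underlying tensor:

```c++
#include <cassert>
#include <vector>

typedef std::vector<std::vector<int> > LoD;

int main() {
  LoD lod = {{0, 9, 10, 15},             // level 0: article boundaries, in words
             {0, 3, 5, 9, 10, 12, 15}};  // level 1: sentence boundaries, in words

  // Both levels end at 15, the number of rows of the (3+2+4+1+2+3)xD tensor.
  assert(lod[0].back() == 15 && lod[1].back() == 15);
  return 0;
}
```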

## Slicing of LoD Tensors

When we use the above 2-level LoD Tensor as the input to a nested RNN, we need to retrieve certain sequences.  Here we define the sequence identified by branch <i,j,...> as the **<i,j,...>-slice**.

For example, the <2>-slice of the above example is

```
10      15
10  12  15
  || |||
```

and the <2,0>-slice of the above slice is

```
10  12
  ||
```
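
As a rough sketch of how such a slice could be located (function and variable names below are illustrative, not PaddlePaddle's API): a top-level slice is just a half-open range of the top-level offsets, and its lower-level offsets are the entries that fall within that range.

```c++
#include <cassert>
#include <utility>
#include <vector>

typedef std::vector<std::vector<int> > LoD;

// Return the [begin, end) word range of the <i>-slice at the top level.
// Illustrative helper, not part of PaddlePaddle's API.
std::pair<int, int> TopLevelSlice(const LoD& lod, int i) {
  return std::make_pair(lod[0][i], lod[0][i + 1]);
}

int main() {
  LoD lod = {{0, 9, 10, 15}, {0, 3, 5, 9, 10, 12, 15}};

  // The <2>-slice covers words [10, 15); its level-1 offsets are the entries
  // of lod[1] inside this range: 10, 12, 15.
  std::pair<int, int> range = TopLevelSlice(lod, 2);
  assert(range.first == 10 && range.second == 15);
  return 0;
}
```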