# Design: Sequence Decoder Generating LoDTensors
In tasks such as machine translation and visual captioning,
a [sequence decoder](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.md) is necessary to generate sequences, one word at a time.

This documentation describes how to implement the sequence decoder as an operator.

## Beam Search based Decoder
The [beam search algorithm](https://en.wikipedia.org/wiki/Beam_search) is necessary when generating sequences. It is a heuristic search algorithm that explores the paths by expanding the most promising node in a limited set.

In the old version of PaddlePaddle, the C++ class `RecurrentGradientMachine` implements the general sequence decoder based on beam search. Due to the complexity involved, the implementation relies on a lot of special-purpose data structures that are hard for users to customize.

There are a lot of heuristic tricks in sequence generation tasks, so the flexibility of the sequence decoder is very important to users.

During the refactoring of PaddlePaddle, some new concepts were proposed, such as [LoDTensor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/lod_tensor.md) and [TensorArray](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/tensor_array.md), which better support sequence usage and help make the implementation of a beam-search-based sequence decoder **more transparent and modular**.

For example, the RNN states, candidate IDs and probabilities of beam search can all be represented as `LoDTensors`;
the selected candidates' IDs at each time step can be stored in a `TensorArray` and `Pack`ed into the translated sentences.

## Changing LoD's absolute offsets to relative offsets
The current `LoDTensor` is designed to store levels of variable-length sequences. It stores several arrays of integers where each represents a level.

The integers in each level represent the begin and end (exclusive) offsets of a sequence **in the underlying tensor**;
let's call this format the **absolute-offset LoD** for clarity.

The absolute-offset LoD can retrieve any sequence very quickly, but it fails to represent empty sequences. For example, a two-level LoD is as follows:
```python
[[0, 3, 9],
 [0, 2, 3, 3, 3, 9]]
```
The first level tells that there are two sequences:
- the first's offset is `[0, 3)`
- the second's offset is `[3, 9)`

while on the second level, there are several empty sequences that both begin and end at `3`.
It is impossible to tell how many of these empty second-level sequences belong to each first-level sequence.
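
To make the ambiguity concrete, here is a small plain-Python/NumPy sketch (an illustration, not the PaddlePaddle API): slicing with absolute offsets is cheap, but the empty sub-sequences carry no information about which first-level sequence they belong to.

```python
import numpy as np

# 9 rows in the underlying tensor, as in the example above
data = np.arange(9 * 2).reshape(9, 2)

# absolute-offset LoD: every level indexes rows of the underlying tensor
abs_lod = [[0, 3, 9],
           [0, 2, 3, 3, 3, 9]]

# retrieving any sequence is a cheap slice
first_subseq = data[abs_lod[1][0]:abs_lod[1][1]]  # rows [0, 2)

# but the empty sub-sequences [3, 3) only say "start and end at row 3";
# nothing records whether they belong to the first or the second sentence
empty = [i for i in range(len(abs_lod[1]) - 1)
         if abs_lod[1][i] == abs_lod[1][i + 1]]
print(empty)  # [2, 3]
```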

Many scenarios rely on representing empty sequences; for example, in machine translation or visual captioning, an instance may have no translation, or a prefix may have an empty candidate set.

So let's introduce another format of LoD:
it stores **the offsets of the lower-level sequences** and is called the **relative-offset** LoD.

For example, to represent the same sequences as the above data:

```python
[[0, 2, 5],
 [0, 2, 3, 3, 3, 9]]
```

the first level represents that there are two sequences;
their offsets in the second-level LoD are `[0, 2)` and `[2, 5)`.

The second level is the same as in the absolute-offset example, because the lowest level is the underlying tensor.
It is easy to see that the second sequence in the first-level LoD has two empty sequences.
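
A minimal sketch of how the relative format resolves the ambiguity, again in plain Python rather than the real API; the helper counts, for each first-level sequence, how many of its second-level sequences are empty.

```python
# relative-offset LoD: the first level indexes sequences of the second level;
# only the lowest level indexes rows of the underlying tensor
rel_lod = [[0, 2, 5],
           [0, 2, 3, 3, 3, 9]]

def empty_subseqs_per_sentence(lod):
    top, low = lod
    counts = []
    for i in range(len(top) - 1):
        subseqs = range(top[i], top[i + 1])  # indices of second-level sequences
        counts.append(sum(low[j] == low[j + 1] for j in subseqs))
    return counts

print(empty_subseqs_per_sentence(rel_lod))  # [0, 2] -> the second sentence owns both empties
```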

The following examples are based on relative-offset LoD.

## Usage in a simple machine translation model
Let's start from a simple machine translation model that is simplified from the [machine translation chapter](https://github.com/PaddlePaddle/book/tree/develop/08.machine_translation) to draw a blueprint of what a sequence decoder can do and how to use it.

The model has an encoder that learns the semantic vector from a sequence, and a decoder that uses the encoder's output to generate new sentences.

**Encoder**
```python
import paddle as pd

dict_size = 8000
source_dict_size = dict_size
target_dict_size = dict_size
word_vector_dim = 128
encoder_dim = 128
decoder_dim = 128
beam_size = 5
max_length = 120

# encoder
src_word_id = pd.data(
    name='source_language_word',
    type=pd.data.integer_value_sequence(source_dict_size))
src_embedding = pd.embedding(size=[source_dict_size, word_vector_dim])

src_word_vec = pd.lookup(src_embedding, src_word_id)

encoder_out_seq = pd.gru(input=src_word_vec, size=encoder_dim)

encoder_ctx = pd.last_seq(encoder_out_seq)
# encoder_ctx_proj is the learned semantic vector
encoder_ctx_proj = pd.fc(
    encoder_ctx, size=decoder_dim, act=pd.activation.Tanh(), bias=None)
```

**Decoder**

```python
def generate():
    # target-side embedding table; defined here because trg_embedding is used below
    trg_embedding = pd.embedding(size=[target_dict_size, word_vector_dim])

    decoder = pd.while_loop()
    with decoder.step():
        decoder_mem = decoder.memory(init=encoder_ctx)  # mark the memory
        generated_ids = decoder.memory() # TODO init to batch_size <s>s
        generated_scores = decoder.memory() # TODO init to batch_size 1s or 0s

        target_word = pd.lookup(trg_embedding, generated_ids)
        # expand encoder_ctx's batch to fit target_word's lod
        # for example
        # decoder_mem.lod is
        # [[0, 1, 5]]
        # its tensor content is [a1 a2 a3 a4 a5]
        # which means there are 2 sentences to translate
        #   - the first sentence has 1 translation prefix, whose state is a1
        #   - the second sentence has 4 translation prefixes, whose states are a2 to a5
        # the target_word.lod is
        # [[0, 1, 5]
        #  [0, 2, 4, 7, 9, 12]]
        # which means there are 2 sentences to translate, with 1 and 4 prefixes respectively
        # the first prefix has 2 candidates
        # the following prefixes have 2, 3, 2 and 3 candidates
        # the encoder_ctx_expanded's content will be
        # [a1 a1 a2 a2 a3 a3 a3 a4 a4 a5 a5 a5]
        encoder_ctx_expanded = pd.lod_expand(encoder_ctx, target_word)
        decoder_input = pd.fc(
            act=pd.activation.Linear(),
            input=[target_word, encoder_ctx_expanded],
            size=3 * decoder_dim)
        gru_out, cur_mem = pd.gru_step(
            decoder_input, mem=decoder_mem, size=decoder_dim)
        scores = pd.fc(
            gru_out,
            size=target_dict_size,
            bias=None,
            act=pd.activation.Softmax())
        # keep the beam_size best candidates for each prefix
        topk_scores, topk_ids = pd.top_k(scores, beam_size)
        topk_generated_scores = pd.add_scalar(topk_scores, generated_scores)

        selected_ids, selected_generation_scores = decoder.beam_search(
            topk_ids, topk_generated_scores)

        # update the states
        decoder_mem.update(cur_mem)  # tells how to update state
        generated_ids.update(selected_ids)
        generated_scores.update(selected_generation_scores)

        decoder.output(selected_ids)
        decoder.output(selected_generation_scores)

translation_ids, translation_scores = decoder()
```
`decoder.beam_search` is an operator that, given the candidates and the scores of the translations including those candidates,
returns the result of the beam search algorithm.

In this way, users can customize anything on the input or output of beam search, for example:

1. Set the corresponding elements in `topk_generated_scores` to zero or some small values, so that beam search will discard those candidates.
2. Remove specific candidates from `selected_ids`.
3. Take the final `translation_ids` and remove unwanted translation sequences from it.
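
As an illustration of tricks 1 and 2, the following NumPy sketch shows the kind of masking and filtering a user could insert around `beam_search`; the variable names mirror the decoder above, but this is not the PaddlePaddle API.

```python
import numpy as np

# hypothetical per-prefix candidates for one step
topk_ids = np.array([[3, 17, 52], [8, 8, 901]])
topk_generated_scores = np.array([[0.5, 0.3, 0.2], [0.6, 0.1, 0.1]])

# trick 1: give banned candidates a tiny score so beam search discards them
banned_word = 8
topk_generated_scores[topk_ids == banned_word] = 1e-9

# trick 2: drop specific candidates from the selected set after beam search
selected_ids = np.array([3, 17, 901])
selected_ids = selected_ids[selected_ids != 901]
```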

The implementation of the sequence decoder can reuse the C++ class [RNNAlgorithm](https://github.com/Superjom/Paddle/blob/68cac3c0f8451fe62a4cdf156747d6dc0ee000b3/paddle/operators/dynamic_recurrent_op.h#L30),
so the Python syntax is quite similar to that of an [RNN](https://github.com/Superjom/Paddle/blob/68cac3c0f8451fe62a4cdf156747d6dc0ee000b3/doc/design/block.md#blocks-with-for-and-rnnop).

Both the candidate IDs and the corresponding scores are two-level `LoDTensors`:

- The first level represents the `batch_size` (source) sentences.
- The second level represents the candidate ID set for each translation prefix.

For example, there may be 3 source sentences to translate, with 2, 3, and 1 candidates respectively.

Unlike in an RNN, in a sequence decoder the previous state and the current state have different LoDs and shapes, so an `lod_expand` operator is used to expand the LoD of the previous state to fit the current state.

For example, the previous state:

* LoD is `[0, 2, 5, 6]`
* content of tensor is `a1 a2 b1 b2 b3 c1`

the current state is stored in `encoder_ctx_expanded`:

* LoD is `[0, 2, 5, 6][0, 3, 5, 8, 9, 11, 11]`
* the content is
  - a1 a1 a1 (a1 has 3 candidates, so its state is copied 3 times, once for each candidate)
  - a2 a2
  - b1 b1 b1
  - b2
  - b3 b3
  - None (c1 has 0 candidates, so c1 is dropped)

A benefit of the relative-offset LoD is that empty candidate sets can be represented naturally.
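
The expansion itself boils down to a row-wise repeat driven by the candidate counts. A minimal NumPy sketch under that assumption (the `lod_expand` function below is illustrative, not the real operator):

```python
import numpy as np

def lod_expand(x, target_lod_level):
    # repeat row i of x once per element of the i-th target sequence;
    # a length of 0 drops the row, which is how c1 disappears above
    repeats = np.diff(target_lod_level)
    return np.repeat(x, repeats, axis=0)

prev_state = np.array(["a1", "a2", "b1", "b2", "b3", "c1"])
candidate_lod = [0, 3, 5, 8, 9, 11, 11]  # second level of the current state

print(lod_expand(prev_state, candidate_lod))
# ['a1' 'a1' 'a1' 'a2' 'a2' 'b1' 'b1' 'b1' 'b2' 'b3' 'b3']
```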

The states at each time step can be stored in a `TensorArray` and `Pack`ed into a final `LoDTensor`. The corresponding syntax is:

```python
decoder.output(selected_ids)
decoder.output(selected_generation_scores)
```

The `selected_ids` are the candidate ids for the prefixes; they will be `Pack`ed by `TensorArray` into a two-level `LoDTensor`, where the first level represents the source sequences and the second level represents the generated sequences.

Packing the `selected_scores` will produce a `LoDTensor` that stores the scores of each translation candidate.

Packing the `selected_generation_scores` will produce a `LoDTensor`, where the tail (last element) of each sequence is the score of the whole translation.

## LoD and shape changes during decoding
<p align="center">
  <img src="./images/LOD-and-shape-changes-during-decoding.jpg"/>
</p>

According to the image above, the only phase that changes the LoD is beam search.

## Beam search design
The beam search algorithm will be implemented as one method of the sequence decoder and has 3 inputs:

1. `topk_ids`, the top K candidate ids for each prefix.
2. `topk_scores`, the corresponding scores for `topk_ids`.
3. `generated_scores`, the scores of the prefixes.

All of these are LoDTensors, so that the sequence affiliation is clear. Beam search will keep a beam for each prefix and select a smaller candidate set for each prefix.

It will return three variables:

1. `selected_ids`, the candidates that the beam search function finally selected for the next step.
2. `selected_scores`, the scores for the candidates.
3. `generated_scores`, the updated scores for each prefix (with the new candidates appended).
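
A rough NumPy sketch of the selection logic for a single source sentence follows; it assumes the scores are log-probabilities that can simply be added (as `pd.add_scalar` does above), and it ignores the LoD bookkeeping that the real operator performs per sentence.

```python
import numpy as np

def beam_search_step(topk_ids, topk_scores, generated_scores, beam_size):
    # total score of each (prefix, candidate) pair
    total = generated_scores[:, None] + topk_scores    # shape [n_prefixes, K]
    best = np.argsort(total.ravel())[::-1][:beam_size]
    prefix_idx, cand_idx = np.unravel_index(best, total.shape)
    selected_ids = topk_ids[prefix_idx, cand_idx]
    selected_scores = total[prefix_idx, cand_idx]
    return selected_ids, selected_scores, prefix_idx

topk_ids = np.array([[4, 9, 2], [7, 1, 5]])              # top-3 ids for 2 prefixes
topk_scores = np.log([[0.5, 0.3, 0.2], [0.6, 0.3, 0.1]])  # candidate log-probs
generated_scores = np.array([-1.0, -2.0])                 # scores of the 2 prefixes

print(beam_search_step(topk_ids, topk_scores, generated_scores, beam_size=2))
```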

## Introducing the LoD-based `Pack` and `Unpack` methods in `TensorArray`
The `selected_ids`, `selected_scores` and `generated_scores` are LoDTensors that exist at each time step,
so it is natural to store them in arrays.

Currently, PaddlePaddle has a module called `TensorArray` which can store an array of tensors. It is a natural place to store the results of beam search.

The `Pack` and `UnPack` methods in `TensorArray` are used to pack the tensors in the array into one `LoDTensor`, or to split an `LoDTensor` into an array of tensors.
They need some extensions to support packing or unpacking an array of `LoDTensors`.
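
As a sketch of the regrouping that a LoD-aware `Pack` has to perform, the snippet below gathers per-step, per-sentence selections into one flat id array plus a two-level relative-offset LoD; the data and the grouping by step are illustrative assumptions, not the final design.

```python
import numpy as np

# step_ids[t][s]: ids selected at time step t for source sentence s
step_ids = [
    [[4, 9], [7]],      # step 0
    [[5], [2, 8, 3]],   # step 1
]

def pack(step_ids):
    n_sentences = len(step_ids[0])
    flat, low, top = [], [0], [0]
    for s in range(n_sentences):
        for t in range(len(step_ids)):
            flat.extend(step_ids[t][s])   # append this sentence's ids for step t
            low.append(len(flat))         # second level: one sequence per (sentence, step)
        top.append(top[-1] + len(step_ids))  # first level: each sentence owns one sequence per step
    return np.array(flat), [top, low]

ids, lod = pack(step_ids)
print(ids)  # [4 9 5 7 2 8 3]
print(lod)  # [[0, 2, 4], [0, 2, 3, 4, 7]]
```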