For learning from variable-length sequences, mainstream frameworks such as TensorFlow, PyTorch, Caffe2, and MXNet all rely on padding.
Sequences of different lengths in a mini-batch are padded with zeros so that they all have the same length.
PaddlePaddle's existing RNN implementation, `RecurrentLayerGroup`,
supports variable-length sequences without padding.
This doc designs Fluid's RNN based on this idea.
## Multi-layer sequence data format `LODTensor`
At present, Paddle stores the data of one mini-batch in a one-dimensional array.
`Argument.sequenceStartPositions` is used to store the start position of each sentence.
In Paddle, `Argument.subSequenceStartPositions` is used to store 2 levels of sequence information; sequences with more levels cannot be supported.
In order to support the storage of `N-level` sequences, we define the sequence information as the following data structure:
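A minimal sketch of this structure (the surrounding class is illustrative; only the `lod_start_pos_` member is prescribed by this doc):
```c++
#include <memory>
#include <vector>

// N-level sequence information: the vector at level k stores the start
// positions of the sub-sequences at that level. A shared_ptr keeps copies
// between Ops cheap.
class LODTensor /* an extension of Tensor, see below */ {
 public:
  // ... the original Tensor interface stays unchanged ...
 private:
  std::shared_ptr<std::vector<std::vector<int>>> lod_start_pos_;
};
```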
Among them, `lod_start_pos_` uses a `shared_ptr` to reduce the cost of storage and replication.
`LODTensor` can be thought of as an extension of `Tensor`, and is almost completely compatible with the original `Tensor`.
## How to support the framework
### Replace `Tensor` with `LoDTensor`
To implement the passing of `LODTensor`, most `Tensor` instances in the framework need to be replaced with `LODTensor`.
The simplest implementation is to directly **replace all previous `Tensor` with `LODTensor`**; for this, one can directly modify the `Tensor` interface created in `pybind.cc`.
In addition, the user may need to be aware of sequences (for example, visualizing a sequence requires parsing the output sequences of the model), so some of the sequence-related APIs also need to be exposed to the Python layer.
### Transmit `lod_start_pos` along with the Op call chain
The framework needs to support the following features to transmit `lod_start_pos` along the Op call chain:
1. Implement the transfer with `shared_ptr`
- An Op that only reads `lod_start_pos` acts as a consumer
- An Op that modifies `lod_start_pos` acts as a producer
- By convention, a consumer only copies the `shared_ptr` passed to it
- A producer needs to allocate its own independent memory for its modifications and expose a new `shared_ptr` to subsequent consumers (see the sketch after this list)
- Since the transfer is implemented by copying a `shared_ptr`, the framework only needs to pass `lod_start_pos` once
2. Ops are transparent to `lod_start_pos`; most of them do not need to be aware of it
3. A producer Op that needs to modify `lod_start_pos` can update its data during `Run`
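A minimal sketch of this convention (the `Var` type and function names are illustrative, not the real framework classes):
```c++
#include <memory>
#include <vector>

using LoD = std::vector<std::vector<int>>;

// Illustrative stand-in for an Op's input/output variable.
struct Var {
  std::shared_ptr<LoD> lod_start_pos_;
};

// A consumer only copies the shared_ptr; the underlying LoD data is shared.
void ConsumerRun(const Var& in, Var* out) {
  out->lod_start_pos_ = in.lod_start_pos_;
}

// A producer clones the LoD before modifying it, so Ops that still hold the
// old shared_ptr keep seeing the original data.
void ProducerRun(const Var& in, Var* out) {
  out->lod_start_pos_ = std::make_shared<LoD>(*in.lod_start_pos_);
  // ... modify *out->lod_start_pos_ independently here ...
}
```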
## Sort by length
After sorting by length, the effective batch size naturally decreases from one time step to the next, and the sorted batch can be fed directly into the Net for batched computation.
For example, the original input:
```
origin:
xxxx
xx
xxx
-> sorted:
xxxx
xxx
xx
```
After `SegmentInputs`, there will be 4 time steps; the input of each time step is as follows (arranged vertically):
```
0 1 2 3
x x x x
x x x
x x
```
In order to track the changes before and after sorting, the following structure is used here:
```c++
struct SortedSeqItem {
  void *start{nullptr};
  void *end{nullptr};
};

std::vector<SortedSeqItem> sorted_seqs;
```
It records the position of each sequence after sorting, and a corresponding new interface is added.
Due to the reordering of the input sequences, the following existing interfaces need to be modified:
- `InitMemories`: memory needs to be rearranged according to `sorted_seqs`
- `SegmentInputs`
- `ConcatOutputs`
In addition, because `sorted_seqs` needs to be reused by `RecurrentGradientOp`, it becomes a new output of `RecurrentOp` and is passed as an input to `RecurrentGradientOp`.
## InitMemories
Due to the reordering, the order of the elements in the `boot_memories` batch also needs to be rearranged accordingly.
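A minimal sketch of the rearrangement (the types are illustrative; the real implementation operates on `Tensor` rows):
```c++
#include <vector>

// Reorder the rows of the boot memory so that row i of the result corresponds
// to the i-th sequence after sorting by length.
std::vector<std::vector<float>> ReorderBootMemory(
    const std::vector<std::vector<float>>& boot_memory,  // one row per sequence
    const std::vector<int>& sorted_to_origin) {          // sorted index -> original index
  std::vector<std::vector<float>> reordered(boot_memory.size());
  for (size_t i = 0; i < sorted_to_origin.size(); ++i) {
    reordered[i] = boot_memory[sorted_to_origin[i]];
  }
  return reordered;
}
```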
## SegmentInputs
`SegmentInputs` uses the information in `sorted_seqs` to split the original sequences, in sorted order, into the inputs of each time step.
The transition is as follows:
```
origin:
xxxx
xx
xxx
|
|
\ /
!
0 1 2 3
x x x x
x x x
x x
```
## ConcatOutputs
`ConcatOutputs` needs to:
- Restore the output of each time step back to the original input sequence order (so that the order in the Infer phase is not disturbed)
- Concatenate each sequence into a regular mini-batch representation
## References
1. [Level of detail](https://en.wikipedia.org/wiki/Level_of_detail)
PaddlePaddle adheres to the following three sections of code and document specifications.
PaddlePaddle uses git for version control and Docker for the building and testing environment. The code includes CUDA, C++, Python, Shell and other programming languages, which comply with the Google C++ Style and PEP-8 guidelines, and the code base includes style checking by an automatic inspection tool. Code comments need to follow the Doxygen specification. Code that does not meet the style requirements will fail to compile. We provide the following guidelines for the use of Git, build tests and code development.
The training procedure of neural networks demands dozens of gigabytes of host memory or several gigabytes of device memory, which is rather memory-consuming. The memory consumed by the PaddlePaddle framework mainly includes:
* Cache memory for DataProvider (only on host memory),
* Memory for neurons' activation information (on both host memory and device memory),
* Memory for parameters (on both host memory and device memory),
* Other memory demands.
The other memory demands mainly support the running of the PaddlePaddle framework itself, such as string allocation and temporary variables, and are not considered here.
Reduce DataProvider Cache Memory
++++++++++++++++++++++++++++++++++
PyDataProvider works asynchronously: it loads data into a host memory pool, where the data fetch and shuffle procedure takes place:
.. graphviz::
digraph {
rankdir=LR;
"Data Files" -> "Host Memory Pool" -> "PaddlePaddle Training";
}
Thus, reducing the DataProvider cache memory lowers memory occupancy and also speeds up data loading before training. However, the size of the memory pool affects the granularity of the shuffle, so when reducing the pool size, a shuffle operation is needed before each data file is read to ensure the randomness of the data.
.. literalinclude:: src/reduce_min_pool_size.py
In this way, the memory consumption can be significantly reduced and hence the training procedure can be accelerated. More details are demonstrated in :ref:`api_pydataprovider2`.
The Neurons' Activation Memory
++++++++++++++++++++++++++++++++
During training, each neuron activation produces a certain amount of temporary data, such as the activation output of the neuron. These data are used to update parameters during back propagation. The amount of memory they consume is mainly determined by two factors: the batch size and the length of each sequence. Therefore, the memory consumed by neuron activations is roughly proportional to the amount of information contained in each mini-batch.
Two practical ways to reduce it:
* Reduce the batch size. Setting a smaller value in the network configuration (e.g. :code:`batch_size=1000`) can help. But since the batch size is a hyperparameter of the neural network itself, reducing it may affect the training result.
* Shorten the sequence length or cut off excessively long sequences. For example, if the sequence lengths in a dataset mostly vary between 100 and 200 but one sequence is 10,000 long, it is quite likely to cause OOM (out of memory), especially in RNN models such as LSTM.
The Parameters Memory
++++++++++++++++++++++
The PaddlePaddle framework supports almost all popular optimizers, and different optimizers have different memory requirements. For example, :code:`adadelta` consumes approximately 5 times as much memory
as the weight parameters themselves, which means :code:`adadelta` needs at least :code:`500M` of memory if the model file containing all
parameters needs :code:`100M`.
Some optimization algorithms such as :code:`momentum` are worth giving a shot.
2. Tricks To Speed Up Training
-------------------------------
The training procedure of PaddlePaddle may be sped up by considering the following aspects:
* Reduce the time consumption of data loading
* Speed up training epochs
* Introduce more computing resources with the utilization of distributed training frameworks
Reduce The Time Consumption of Data Loading
++++++++++++++++++++++++++++++++++++++++++++
Using :code:`pydataprovider` with a smaller cache pool and memory cache enabled can significantly speed up data loading. The principle of reducing the :code:`DataProvider` cache pool is the same as the method above, which reduces memory occupancy by setting a smaller cache pool.
.. literalinclude:: src/reduce_min_pool_size.py
Besides, the :code:`@provider` interface provides a :code:`cache` parameter to control caching. If it is set to :code:`CacheType.CACHE_PASS_IN_MEM`, the data after the first :code:`pass` (a pass means all data have been fed into the network once for training) will be cached in memory, and in the following passes no new data will be read from the :code:`python` side; the cached data in memory is used instead. This strategy can also reduce the time spent in data loading.
Accelerating Training Epochs
+++++++++++++++++++++++++++++
Sparse training is supported in PaddlePaddle. The feature to be trained must be one of :code:`sparse_binary_vector`, :code:`sparse_vector`, or :code:`integer_value`. Meanwhile, the layer that interacts with the training data needs to turn its parameter to sparse updating mode by setting :code:`sparse_update=True`.
Take :code:`word2vec` as an example: to train word vectors, one needs to predict the middle word from the two words before it and the two words after it. The DataProvider of this task is:
.. literalinclude:: src/word2vec_dataprovider.py
The configuration of this task is:
.. literalinclude:: src/word2vec_config.py
Introduce More Computing Resources
+++++++++++++++++++++++++++++++++++
More computing resources can be introduced in the following manners:
* Single CPU platform training
* Use multi-threading by setting :code:`trainer_count`.
* Single GPU platform training
* Set :code:`use_gpu` to train on a single GPU.
* Set :code:`use_gpu` and :code:`trainer_count` to enable multi-GPU training.
* Cluster training
* Refer to :ref:`cluster_train`.
3. Assign GPU Devices
----------------------
Assume a computing platform consists of 4 GPUs whose serial numbers range from 0 to 3:
* Method 1: specify a GPU as the computing device by setting the :code:`CUDA_VISIBLE_DEVICES` environment variable.
The Paddle binary catches floating-point exceptions at runtime and terminates when NaN or Inf occurs. Floating-point exceptions are mostly caused by float overflow or division by zero. There are three main reasons that may raise such exceptions:
* Parameters or gradients during training are oversized, which leads to float overflow during calculation.
* The model fails to converge and diverges to large values.
* Parameters may converge to singular values due to bad training data. If the scale of the input data is too big and it contains millions of parameter values, a float overflow error may arise during matrix multiplication.
For details, refer to the `nmt_without_attention <https://github.com/PaddlePaddle/models/blob/develop/nmt_without_attention/train.py#L35>`_ example.
2. Set :code:`error_clipping_threshold` as:
.. code-block:: python
decoder_inputs = paddle.layer.fc(
act=paddle.activation.Linear(),
size=decoder_size * 3,
bias_attr=False,
input=[context, current_word],
layer_attr=paddle.attr.ExtraLayerAttribute(
error_clipping_threshold=100.0))
For details, refer to the `machine translation <https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/train.py#L66>`_ example.
The main differences between these two methods are:
1. Both clip the gradient, but they take effect at different points: the former takes effect when the :code:`optimizer` updates the network parameters, while the latter takes effect during the back-propagation computation of the activation functions.
2. The clipping targets are different: the former clips the gradients of the trainable parameters, while the latter clips the gradients propagated to prior layers.
Moreover, such problems may also be mitigated by using smaller learning rates or normalizing the data.
5. Fetch Multi Layers’ Prediction Result With Infer Interface
----------------------------------------------------------------
* Add the layers to be used as :code:`output_layer` to the input parameters of the :code:`paddle.inference.Inference()` interface.
* Assign certain fields to be output. Taking :code:`value` as an example, it can be done with the following code:
.. code-block:: python
out = inferer.infer(input=data_batch, field=["value"])
It is important to note that:
* If 2 layers are assigned as output layers, then the output results consist of 2 matrices.
* Assume the output of the first layer A is a matrix of size N1 * M1, and the output of the second layer B is a matrix of size N2 * M2.
* By default, paddle.v2 will horizontally concatenate A and B; when N1 is not equal to N2, it will raise the following error:
.. code-block:: python
ValueError: all the input array dimensions except for the concatenation axis must match exactly
The horizontal concatenation of matrices from multiple layers mainly fails when:
* The outputs include both sequence layers and non-sequence layers;
* Multiple output layers process sequences with different lengths;
Such issues can be avoided by calling the infer interface with :code:`flatten_result=False`. The infer interface then returns a Python list, in which
* The number of elements equals the number of output layers in the network;
* Each element in the list is the result matrix of a layer, whose type is numpy.ndarray;
* The height of each matrix equals the number of samples (in non-sequence mode) or the number of elements in the input sequence (in sequence mode). Their widths are both equal to the layer size in the configuration.
6. Fetch the Output of A Certain Layer During Training
---------------------------------------------------------
In :code:`event_handler`, the interface :code:`event.gm.getLayerOutputs("layer_name")` returns the forward output of :code:`layer_name` in the mini-batch, organized as a :code:`numpy.ndarray`.
The output can then be used to compute custom metrics inside the event handler.
Note: this function cannot fetch the content of an individual :code:`paddle.layer.recurrent_group` step, but the output of the whole :code:`paddle.layer.recurrent_group` can be fetched.
7. Fetch Parameters’ Weight and Gradient During Training
----------------------------------------------------------
Under certain situations, knowing the weights of the mini-batch currently being trained can provide more insight into many problems. Their values can be acquired by printing them in :code:`event_handler` (note that to obtain such parameters when training on GPU, you should handle the :code:`paddle.event.EndForwardBackward` event). Detailed code is as follows:
.. code-block:: python
...
parameters = paddle.parameters.create(cost)
...
def event_handler(event):
if isinstance(event, paddle.event.EndForwardBackward):
if event.batch_id % 25 == 0:
for p in parameters.keys():
logger.info("Param %s, Grad %s",
parameters.get(p), parameters.get_grad(p))
Note that “acquiring the output of a certain layer during training” and “acquiring the weights and gradients of parameters during training” both need to copy training data from the C++ environment to numpy, which affects training performance to some degree. Don’t use these two functions when the performance of the training procedure matters.
Recurrent neural networks (RNN) are an important tool for modeling sequential data. PaddlePaddle provides a flexible interface for building complex recurrent neural networks. We will demonstrate how to use PaddlePaddle to build RNN models in the following 4 parts.
In the first part, we will show you how to configure a recurrent neural network in PaddlePaddle, from simple to complex. First, we will use a vanilla recurrent neural network as an example to show how to configure a recurrent neural network architecture. Then we will use the sequence-to-sequence model as an example to demonstrate how to configure complex recurrent neural network models step by step.
.. toctree::
:maxdepth: 1
rnn_config_en.rst
Recurrent Group is the key unit for building complex recurrent neural network models. The second part describes the related concepts and basic principles of Recurrent Group, and gives a detailed description of the Recurrent Group API. In addition, it also introduces sequence-level RNN (with hierarchical sequences as input) and the usage of Recurrent Group in it.
.. toctree::
:maxdepth: 1
recurrent_group_en.md
In the third part, two-level sequences are briefly demonstrated, and then the layers that support two-level sequences as input are listed and described.
.. toctree::
:maxdepth: 1
hierarchical_layer_en.rst
In the last part, the unit test of hierarchical RNN is presented as an example to explain how to use hierarchical RNN. In the unit test, we use a two-level sequence RNN and a single-layer sequence RNN with the same effect as the network configurations, respectively.
The RecordIO file format is a container for records. This package is a C++ implementation of https://github.com/paddlepaddle/recordio, which originates from https://github.com/wangkuiyi/recordio.
## Fault-tolerant Writing
RecordIO was initially designed within Google for logging, so it groups records into *chunks*, whose header contains an MD5 hash of the chunk. A process that writes logs is supposed to call the Writer interface to add records. Once the writer accumulates a handful of them, it groups them into a chunk, puts the MD5 into the chunk header, and appends the chunk to the file. In the event the process crashes unexpectedly, the last chunk in the RecordIO file could be incomplete or corrupt. The RecordIO reader is able to recover from these errors when the process restarts by identifying incomplete chunks and skipping over them.
## Reading Ranges
A side effect of chunks is that they make it easy to index records while reading, which allows us to read a range of successive records. This is good for distributed log processing, where each MapReduce task handles only part of the records in a big RecordIO file.
The procedure that creates the index starts by reading the header of the first chunk. It records the offset (0) and the size of the chunk, then skips to the header of the next chunk by calling the `fseek` API. Please be aware that most distributed filesystems and all POSIX-compatible local filesystems provide `fseek`, which runs much faster than `fread`. This procedure generates a map from chunks to their offsets, which allows a reader to locate and read a range of records.
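A minimal sketch of this scan (the header layout is an assumption for illustration; the real RecordIO chunk header also carries a magic number, checksum, and compressor field):
```c++
#include <cstdint>
#include <cstdio>
#include <vector>

struct ChunkIndexEntry {
  long offset;     // byte offset of the chunk within the file
  uint32_t size;   // size of the chunk body in bytes
};

// Scan the file chunk by chunk: record each chunk's offset and size, then
// skip over the chunk body with fseek instead of reading it.
std::vector<ChunkIndexEntry> BuildIndex(const char* path) {
  std::vector<ChunkIndexEntry> index;
  FILE* f = std::fopen(path, "rb");
  if (f == nullptr) return index;
  for (;;) {
    long offset = std::ftell(f);
    uint32_t chunk_size = 0;  // assumed: the header stores the body size
    if (std::fread(&chunk_size, sizeof(chunk_size), 1, f) != 1) break;
    index.push_back({offset, chunk_size});
    if (std::fseek(f, static_cast<long>(chunk_size), SEEK_CUR) != 0) break;
  }
  std::fclose(f);
  return index;
}
```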