### Problem 1: The lookup table may be very large.

In applications like search engines and recommendation systems, the number of feature Ids may be very large, say 100,000,000,000. For a lookup table of float values with an embedding size of 8, the total size of the table is:

```
100,000,000,000 * 8 * 4 (Bytes) = 2980.23 GB
```
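This figure is easy to double-check; a quick back-of-the-envelope sketch in Python, assuming 4-byte floats and GiB-based units:

```python
# Sanity check of the table size quoted above.
num_ids = 100 * 1000 ** 3        # 100,000,000,000 feature Ids
emb_size = 8                     # floats per table row
bytes_per_float = 4

total_bytes = num_ids * emb_size * bytes_per_float
print(total_bytes / 1024 ** 3)   # ~2980.23 GB: far too large for one machine
```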
### Solution: Distributed storage

1. Paddle uses [SelectedRows](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/selected_rows.md) as the storage format for the lookup table. The lookup table parameter is split across multiple machines according to a hash of the feature Id, and the input data is split and sent to the same machines to prefetch the parameter.
1. For common parameters, the trainer fetches the whole parameter for training, but the trainer cannot store the whole of a big lookup table. Because the input data features are very sparse, only a few parameter rows are needed for each training step, so we use `prefetch_op` to prefetch only the needed rows to the trainer, as sketched below.
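A minimal sketch of this routing and prefetching, with plain dicts standing in for the per-pserver table shards; the modulo hash and the function names `split_ids` and `prefetch` are illustrative, not Paddle's actual operators:

```python
def split_ids(ids, num_pservers):
    # Group feature Ids by owning pserver, using the same hash
    # on the trainer and pserver sides (here: simple modulo).
    shards = [[] for _ in range(num_pservers)]
    for i in set(ids):                    # deduplicate within the batch
        shards[i % num_pservers].append(i)
    return shards

def prefetch(batch_ids, pserver_tables):
    # Fetch only the rows this batch needs; each dict stands in
    # for one pserver's shard of the lookup table.
    shards = split_ids(batch_ids, len(pserver_tables))
    return {i: pserver_tables[p][i]
            for p, ids in enumerate(shards) for i in ids}
```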
### Problem 2: The Ids in the lookup table are not known before training.

The feature Ids are calculated by a hash function, and the feature data source is so large that we cannot collect all the Ids before training. So we cannot initialize the table before training.
### Solution: Id auto growth

At the beginning of training, Paddle only allocates memory for the lookup table on the parameter server side; no Ids or values are initialized. During training, when a parameter server receives an Id, it returns the existing parameter row if the Id is already in the lookup table; if the Id does not exist, Paddle adds it to the lookup table and initializes its value.
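A minimal sketch of this lazy, get-or-create behavior on one parameter server; the class name and the uniform random initialization are illustrative stand-ins for Paddle's table and initializer:

```python
import random

class LookupTableShard(object):
    """One pserver's shard of the table; rows are created on first access."""

    def __init__(self, emb_size=8):
        self.emb_size = emb_size
        self.rows = {}                      # feature Id -> list of floats

    def lookup(self, feature_id):
        if feature_id not in self.rows:     # unseen Id: grow the table
            self.rows[feature_id] = [random.uniform(-0.1, 0.1)
                                     for _ in range(self.emb_size)]
        return self.rows[feature_id]
```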
### Problem 3: Parameter load and save

For common parameters, Paddle uses the trainer to save and load them. But for the distributed lookup table, the trainer cannot do this because of the table's large size.
### Solution: Parameter server side save and load

Paddle supports parameter-server-side save and load for the distributed lookup table. Each parameter server saves and loads only its own part of the whole table, as sketched below.
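A minimal sketch of per-shard checkpointing; the one-file-per-shard naming and pickle serialization are illustrative choices, not Paddle's checkpoint format:

```python
import pickle

def save_shard(shard_rows, pserver_index, prefix="lookup_table"):
    # Each pserver writes only the rows it owns.
    with open("%s.shard-%d" % (prefix, pserver_index), "wb") as f:
        pickle.dump(shard_rows, f)

def load_shard(pserver_index, prefix="lookup_table"):
    # On restart, each pserver reads back only its own file.
    with open("%s.shard-%d" % (prefix, pserver_index), "rb") as f:
        return pickle.load(f)
```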
## Architecture

The whole architecture of the distributed lookup table is as follows:
### Training steps:

1. Read a batch of data; the data consists of feature Ids.
1. The input Ids are split by `split_ids_op` with the same hash function as the lookup table.
1. The `prefetch_op` uses the split result to prefetch the needed parameter rows from the lookup table.
1. Run forward-backward to get the gradient of the lookup table.
1. `split_ids_op` splits the gradient, which is then sent to the parameter servers by `send_op`.
1. The parameter servers update the table with the received gradients.
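A highly simplified, trainer-side rendering of these steps, reusing `split_ids` and `prefetch` from the sketches above; the zero-gradient stub and the plain-dict pservers are illustrative, and the real system runs the forward-backward pass and sends gradients over the network:

```python
def train_step(batch_ids, pserver_tables, lr=0.01):
    # Steps 1-3: split the Ids and prefetch only the needed rows.
    rows = prefetch(batch_ids, pserver_tables)

    # Step 4: forward-backward yields one gradient per fetched row;
    # a zero stub stands in for the real computation here.
    grads = {i: [0.0] * len(row) for i, row in rows.items()}

    # Steps 5-6: route each gradient back to the owning pserver,
    # which applies a plain SGD update to its shard.
    for i, grad in grads.items():
        table = pserver_tables[i % len(pserver_tables)]
        table[i] = [w - lr * g for w, g in zip(table[i], grad)]
```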
### Alternative design: Storage Service Doesn't Optimize

The forward algorithm requires a distributed storage service for W, and the backward algorithm prefers that the storage system can apply the optimization algorithm on W. The design described above does both on PaddlePaddle's parameter servers. An alternative design doesn't require that the storage service can do optimization: it uses a highly-optimized distributed storage, e.g., memcached, as the storage service, and runs the optimization algorithm on the parameter servers of PaddlePaddle. The following figure illustrates that training process.

<!--
Note: please update the following URL when updating this digraph.
-->
<img src='https://g.gravizo.com/svg?
digraph G {
  rankdir="LR";
  subgraph cluster1 {
    P1 [label="pserver 1"];
    P2 [label="pserver 2"];
    T1 [label="trainer 1"];
    T2 [label="trainer 2"];
    T3 [label="trainer 3"];
  }
  KV [label="memcached"];
  T1 -> P1;
  T1 -> P2;
  T2 -> P1;
  T2 -> P2;
  T3 -> P1;
  T3 -> P2;
  P1 -> KV [color=gray, weight=0.1];
  KV -> P1 [color=gray, weight=0.1];
  P2 -> KV [color=gray, weight=0.1];
  KV -> P2 [color=gray, weight=0.1];
  KV -> T1 [color=gray, weight=0.1];
  KV -> T2 [color=gray, weight=0.1];
  KV -> T3 [color=gray, weight=0.1];
}
'/>
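A minimal sketch of the division of labor in this alternative, with a dict standing in for the external KV store; the point is that the storage service only gets and sets rows, while the parameter server computes the optimizer update itself:

```python
kv_store = {}   # stand-in for memcached: it stores rows but never optimizes

def pserver_apply_gradient(feature_id, grad, lr=0.01, emb_size=8):
    # The pserver reads W from storage, runs the optimizer step locally,
    # and writes the updated row back to the storage service.
    row = kv_store.get(feature_id, [0.0] * emb_size)
    kv_store[feature_id] = [w - lr * g for w, g in zip(row, grad)]
```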