Commit b7ed0fa3, authored by Q qiaolongfei

update the design doc of distributed lookup table

Parent a0fefc27
# Design Doc: Distributed Lookup Table Operator
A distributed lookup table operator in PaddlePaddle where the table could be out of the memory of a computer.
## Background
...

$$W = f(W, W')$$
The following figure illustrates the backward pass of the lookup operator:

![lookup table training](./src/lookup_table_training.png)
## Distributed Lookup Table

### Problem 1: The lookup table may be very large.
In applications such as search engines and recommendation systems, the number of feature ids may be very large, say 100,000,000,000. For a lookup table with rows of size 8 and 4-byte values, the total size of the table is:

```
100,000,000,000 * 8 * 4 bytes = 3,200,000,000,000 bytes ≈ 2980.23 GB
```
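For reference, a minimal sketch reproducing this back-of-the-envelope estimate (the numbers are the ones assumed above; GB here means 1024^3 bytes):

```python
# Sketch: back-of-the-envelope size of the lookup table assumed above.
num_ids = 10 ** 11        # number of distinct feature ids
row_width = 8             # values per table row
bytes_per_value = 4       # float32

total_bytes = num_ids * row_width * bytes_per_value
print("table size: %.2f GB" % (total_bytes / 1024.0 ** 3))  # -> table size: 2980.23 GB
```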
### Solution: Distributed storage
1. PaddlePaddle uses SelectedRows as the storage format for the lookup table. The table parameters are split across multiple machines according to a hash of the feature id, and the input data is split and sent to the same machines so that each machine can prefetch the parameters it owns (see the sketch after this list).
1. For ordinary parameters, a trainer fetches the whole parameter for training. A trainer cannot hold the whole of a big lookup table, but since the input features are very sparse, only a few parameter rows are needed at a time, so we use `prefetch_op` to fetch only the needed rows to the trainer.
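A minimal sketch of the id-splitting idea, assuming a simple modulo hash and a fixed number of pservers; this is an illustration, not the actual `split_ids_op` implementation:

```python
# Sketch: shard a batch of sparse feature ids across pservers by hash.
def split_ids(ids, num_pservers):
    """Group ids by the pserver that owns their shard of the table."""
    buckets = [[] for _ in range(num_pservers)]
    for feature_id in ids:
        buckets[hash(feature_id) % num_pservers].append(feature_id)
    return buckets

batch_ids = [3, 17, 3, 1024, 9000001]     # sparse ids from one mini-batch
shards = split_ids(set(batch_ids), 4)     # deduplicate, then shard to 4 pservers
# Each shard is sent to its pserver, which returns only those rows of the
# table -- this is the "prefetch" step.
```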
### Problem 2: The ids in the lookup table are not known before training.

The feature ids are produced by a hash function, and because the feature data source is so large, we cannot collect all the ids before training. Therefore we cannot initialize the table before training.
### Solution: Id auto growth

At the beginning of training, PaddlePaddle only allocates memory for the lookup table on the pserver side; the ids and their values are not initialized. During training, when a pserver receives an id, it returns the existing parameter row if the id is already in the lookup table; if the id does not exist, the pserver adds it to the table and initializes its value. A minimal sketch of this behavior follows.
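The sketch below uses a plain dict and a uniform random initializer as stand-ins for the real SelectedRows storage and initializer; it only illustrates the grow-on-first-access behavior:

```python
import random

EMB_WIDTH = 8   # assumed row width, matching the size estimate above

table = {}      # id -> list of EMB_WIDTH floats (stand-in for SelectedRows)

def lookup(feature_id):
    """Return the row for feature_id, creating and initializing it on first access."""
    if feature_id not in table:
        table[feature_id] = [random.uniform(-0.1, 0.1) for _ in range(EMB_WIDTH)]
    return table[feature_id]

row = lookup(42)        # first access: the row is created and initialized
same_row = lookup(42)   # later accesses return the existing parameter
```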
## Architecture

The whole architecture of the distributed lookup table is as below:
### Training steps:
1. Read a batch of data; the data consists of feature ids.
1. The input ids are split by `split_ids_op` using the same hash function as the lookup table.
1. The `prefetch_op` uses the split result to prefetch the corresponding parameter rows from the lookup table.
1. Run the forward and backward passes to get the gradient of the lookup table.
1. `split_ids_op` splits the gradient, which is then sent to the parameter servers by `send_op`.
1. The parameter servers update the table with the received gradients.
![distribute lookup table](./src/distributed_lookup_table.jpeg)
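The following sketch ties the training steps together. The pserver objects and their `prefetch`/`send_grads` methods, as well as `model.forward_backward`, are hypothetical stand-ins for `prefetch_op`, `send_op`, and the generated forward/backward program, not real Paddle APIs:

```python
# Sketch of one training iteration with a distributed lookup table.
def shard_by_hash(ids, n):
    buckets = [[] for _ in range(n)]
    for i in ids:
        buckets[hash(i) % n].append(i)   # same hash as the table sharding
    return buckets

def train_one_batch(batch_ids, pservers, model):
    id_shards = shard_by_hash(set(batch_ids), len(pservers))        # step 2: split ids
    rows = {}
    for server, ids in zip(pservers, id_shards):                    # step 3: prefetch rows
        rows.update(server.prefetch(ids))
    table_grads = model.forward_backward(batch_ids, rows)           # step 4: compute gradients
    grad_shards = shard_by_hash(table_grads.keys(), len(pservers))  # step 5: split gradients
    for server, ids in zip(pservers, grad_shards):
        server.send_grads({i: table_grads[i] for i in ids})         # step 6: pservers update table
```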